[R] Problem on estimating fish species

2015-05-25 Thread Ivone Figueiredo
Hi
I am trying to estimate the proportion of landings by species, but I
always get error messages.

Can you please help me? Thanks, Ivone

I am using R2WinBUGS to call OpenBUGS as follows:

skate.4 <- bugs(skateA.data, inits = skatesA.inits, skateA.parameters,
                model1.file, n.chains = 1, n.iter = 50,
                bugs.directory = "C:/Program Files (x86)/OpenBUGS/OpenBUGS323/",
                program = c("OpenBUGS"), debug = TRUE)

skateA.data <- list(n=length(y_A), y_A =y_A, Species_A= Species_A,
n.species_A=n.species_A)

skateA.parameters <- c ("mu","b.species", "sigma.species",  "sigma.epsilon")

skatesA.inits <- function (){
  list (mu=rnorm(0.5),  b.species=rnorm(0.2), sigma.species=runif(1),
sigma.epsilon=runif(1))}


The model is

model {
  for (i in 1:n){
y[i] ~ dpois (lambda[i])
lambda[i] <- exp(mu+b.species[Species_A[i]] + epsilon[i])
epsilon[i]  ~ dnorm (0, tau.epsilon)
  }
  mu ~ dnorm (0, .0001)
  mu.adj <- mu +  mean(b.species[])
  tau.epsilon <- pow(sigma.epsilon, -2)
  sigma.epsilon ~ dunif (0, 100)
  b.species[1] <-0

  for (j in 2:n.species_A){
b.species[j] ~ dnorm (0, tau.species)
  }
  tau.species <- pow(sigma.species, -2)
  sigma.species ~ dunif (0, 100)
}

The data are:

n  <-  54

n.species_A <- 3





y <- c(6.80, 10.20, 124.20, 6.350, 0.00, 63.20, 0.00, 0.00, 86.850, 3.60,
       2.550, 60.950, 0.00, 0.00, 87.20, 2.60, 0.00, 58.10, 3.360, 34.990,
       62.140, 0.00, 0.00, 63.90, 0.00, 0.00, 112.660, 0.00, 0.00, 121.630,
       0.00, 3.580, 101.770, 0.00, 0.00, 158.00, 0.00, 0.00, 123.279, 0.00,
       0.00, 124.255, 3.170, 0.00, 27.840, 5.930, 0.00, 174.435, 0.00, 0.00,
       53.525, 0.00, 4.858, 68.945)



Species <- c("RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”,
“RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”, “RJC", "RJE”, “RJH”, “RJC”,
“RJE”, “RJH”, “RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”, “RJC”, “RJE”,
“RJH”, “RJC”, “RJE", "RJH”, “RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”,
“RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH”, “RJC”, “RJE”, “RJH", "RJC”,
“RJE”, “RJH”, “RJC”, “RJE”, “RJH")
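
For illustration, here is a minimal sketch (an editorial addition, not from the
original post) of turning the character Species vector into the integer index
the model expects, assuming y above is the intended response:

# Map the species codes to integers 1..3 so they can index b.species[Species_A[i]],
# then assemble the data list that is passed to bugs().
Species_A   <- as.integer(factor(Species, levels = c("RJC", "RJE", "RJH")))
n.species_A <- length(unique(Species_A))   # 3
y_A         <- y
skateA.data <- list(n = length(y_A), y_A = y_A,
                    Species_A = Species_A, n.species_A = n.species_A)
str(skateA.data)   # quick sanity check before calling bugs()

Note also that the model block refers to y[i] while the data list supplies y_A,
and that dpois() expects integer counts while y here holds non-integer landings;
either mismatch alone would make OpenBUGS complain.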


[R] How to extract the standardized residuals tests from the summary report of fGarch

2015-05-25 Thread w...@szu.edu.cn
I am using R Markdown to produce HTML slides automatically, and I want to know
how to extract the standardised residuals tests section from the summary report.


Here are the R-code:


>library("fGarch") 
>N = 200
>x.vec = as.vector(garchSim(garchSpec(rseed = 1985), n = N)[,1])
>fit=garchFit(~ garch(1,1), data = x.vec, trace = FALSE)


> summary(fit)


Title:
 GARCH Modelling 


Call:
 garchFit(formula = ~garch(1, 1), data = x.vec, trace = FALSE) 


Mean and Variance Equation:
 data ~ garch(1, 1)

 [data = x.vec]


Conditional Distribution:
 norm 


Coefficient(s):
mu   omega  alpha1   beta1  
3.5418e-05  1.0819e-06  8.8855e-02  8.1200e-01  


Std. Errors:
 based on Hessian 


Error Analysis:
        Estimate  Std. Error  t value  Pr(>|t|)
mu     3.542e-05   2.183e-04    0.162     0.871
omega  1.082e-06   1.051e-06    1.030     0.303
alpha1 8.885e-02   5.450e-02    1.630     0.103
beta1  8.120e-01   1.242e-01    6.538  6.25e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


Log Likelihood:
 861.9494    normalized:  4.309747


Description:
 Mon May 25 09:10:52 2015 by user: WENSQ 




Standardised Residuals Tests:
                                 Statistic  p-Value
 Jarque-Bera Test   R    Chi^2   1.114092   0.5728988
 Shapiro-Wilk Test  R    W       0.9932317  0.4911085
 Ljung-Box Test     R    Q(10)   7.303961   0.6964713
 Ljung-Box Test     R    Q(15)   8.712829   0.8920477
 Ljung-Box Test     R    Q(20)   9.766984   0.972203
 Ljung-Box Test     R^2  Q(10)   11.88455   0.2928573
 Ljung-Box Test     R^2  Q(15)   14.93927   0.4558006
 Ljung-Box Test     R^2  Q(20)   20.08937   0.4523516
 LM Arch Test       R    TR^2    11.57234   0.480607


Information Criterion Statistics:
  AIC   BIC   SIC  HQIC 
-8.579494 -8.513527 -8.580273 -8.552798 




Dr.  WEN SONG-QIAO
SHENZHEN UNIVERSITY
SHENZHEN,CHINA
Email: w...@szu.edu.cn

Re: [R] Vincentizing Reaction Time data in R

2015-05-25 Thread Gabriel WEINDEL

Hi John,

Sorry for the response delay.

I found a way to do it in a slightly different way:
http://www.nicebread.de/comparing-all-quantiles-of-two-distributions-simultaneously/


You're right about the application. I just put some comments in your post.

Thank you for your time. I will now use the quantile comparison for my
statistical test, and perform vincentization later for my thesis results.
If I create something useful I will share it in this thread.


Gabriel


Do I  understand the idea behind 'vincentizing' reaction times?
I don't want to work through the Ratcliff, (1979)  paper unless I must.

Let's say we have a subject, s1, with 50 RT scores.
We sort the scores from high to low (or low to high, it makes no difference),
then we split the 50 scores into quantiles (let's say deciles) and calculate
the mean per decile?

Repeat for each subject.  We now have the 'vincentized' means.

That's it?


Yes, the point is to get rid of the shape blindness of, for example, an ANOVA
on sample means, and by using quantiles to also reduce the influence of outliers.


Example, of what I understand for just for one subject (s1)

# install plyr package if not already installed
install.packages("plyr")
#===

library(plyr)

# create some sciency looking sample data
rtmatter   <- c (seq(0.50 , 1.50, 0.01), seq(0.55, 1.55,  0.01) )
str(rtmatter)  # verify it looks sciencey

# create one subject
s1  <-  sample(rtmatter, 50, replace = TRUE)

# calculate 'vincentized' means for s1
s1  <-  sort(s1)
c1  <-  cut(s1, 10, right = TRUE)


You cut the distribution into 10 bins; vincentization requires fixing the cut so
that n ≥ bins, so a formula should be used to compute it for each set of data.



ss1  <-  data.frame(c1,  s1)
vince1   <-   ddply(ss1, .(c1), summarize, decile.mean = mean(s1) )
vince1


That's right too.


John Kane
Kingston ON Canada



-Original Message-
From: gabriel.wein...@gmail.com
Sent: Thu, 21 May 2015 17:50:02 +0200
To: jrkrid...@inbox.com, yishinlin...@gmail.com, gunter.ber...@gene.com,
djnordl...@frontier.com
Subject: Re: [R] Vincentizing Reaction Time data in R

Bert: Thank you for your advice. It would be a little bit difficult to
do for my master's thesis but, if I go further with a PhD thesis (and I
do want to), I will probably follow your advice and get in touch with a
statistician.

Yishin: Thank you very much for the references, I will definitely read
the papers you quote. I'm already a little aware of the possible misuses
of vincentization, in particular thanks to the paper by Rouder and
Speckman (2004), and it seems to fit my design. No problem if you want
to keep the code, but I have to tell you that it's our first semester
using R and the teacher surely didn't think that we would run out of
available code for our experiment. As John guessed, the purpose of the
course was to give a first view of R to get over the temptation of SPSS;
my bad if I want to avoid biased statistics like sample-mean ANOVAs on RTs.

Dan: Thank you for your tip, this will surely help, but I'm quite at the
beginning of my R skills so I hardly trust myself to do it on my own;
I can certainly give it a try though.

John: I had the same assumption, but my research director warned me that
I might run out of time for my first presentation by doing so, though it
should be feasible for my master's thesis. But again, as I said to Dan,
I'm quite concerned about my actual R skills.

Anyway I have to say that I'm really glad to see how much help you can
get by using the r-help mailing-list.

Regards,
Gabriel

On 21/05/2015 15:52, John Kane wrote:

In line

John Kane
Kingston ON Canada



-Original Message-
From: yishinlin...@gmail.com
Sent: Thu, 21 May 2015 10:13:54 +0800
To: gabriel.wein...@gmail.com
Subject: Re: [R] Vincentizing Reaction Time data in R

On Wed, 20 May 2015 18:13:17 +0800,
Hi Gabriel,

As far as I can recall, there isn't an R package that has explicitly
implemented "vincentization". You can definitely find some code
segments/functions that implement "vincentize" on the web, but you
should verify that they do exactly what you wish to do.  If you look
at the question from a percentile/quantile perspective, it will not take
you long to realise that they are similar.  I would suggest you read,
as John Kane suggested, Prof. Ratcliff's 1979 paper.  Another paper
that may be very helpful is Prof. Van Zandt's 2000 RT paper.

However, you should be aware that there are several different
implementations of "vincentization", and it is debatable, if not
problematic, to use it rather than other more general quantile methods.
It would help you to understand not only how to do vincentization, but
also why/why not, if you could read papers from Jeff Rouder's as well as
from Heathcote's and Brown's labs.

Sorry that I hesitate to give you the code, because this looks like part
of your coursework.  It would be more rewarding for you if you could
figure it out by yourself.

Yishin


While

[R] Issues with loading csv file

2015-05-25 Thread Shivi82
HI All,

I am trying to load a CSV file into R. The code for this is:
mydata <- read.csv("Jan-May Data.csv", header=TRUE)

However, with this I get the error message below:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
  cannot open file 'Jan-May Data.csv': No such file or directory

I am under the impression that R automatically pulls the data from the
working directory and that we do not have to add the location where the file
is saved. Please let me know if my understanding is correct, and help with the
error as well.

Please note the csv file is already saved in the WD.
Thank you, Shivi





Re: [R] Issues with loading csv file

2015-05-25 Thread Kevin E. Thorpe

On 05/25/2015 08:19 AM, Shivi82 wrote:

HI All,

I am trying to load an CSV file into the R project. the code for the same
is:
mydata<- read.csv("Jan-May Data.csv", header=TRUE)

however with this I am getting the below error message:
/*Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
   cannot open file 'Jan-May Data.csv': No such file or directory*/

I am under the impression that R automatically pulls the data from the
working directory and we do not have to add the location where the file is
saved. Please let me know if my understanding is correct and help on the
error as well.

Please note the csv file is already saved in the WD.
Thank you, Shivi



The error message suggests that R is not finding the file. Are you sure 
it is in R's working directory? Try explicitly setting the working 
directory to the directory (folder) where your CSV file is. There is a 
menu option for this.
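
A minimal sketch of that check (the setwd() path below is only an example, not
Shivi's actual folder):

getwd()                          # where R is currently looking
list.files(pattern = "\\.csv$")  # the CSV files R can see there
# setwd("C:/Users/Shivi/Documents")   # point R at the right folder if needed
if (file.exists("Jan-May Data.csv")) {
  mydata <- read.csv("Jan-May Data.csv", header = TRUE)
}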


Kevin

--
Kevin E. Thorpe
Head of Biostatistics,  Applied Health Research Centre (AHRC)
Li Ka Shing Knowledge Institute of St. Michael's
Assistant Professor, Dalla Lana School of Public Health
University of Toronto
email: kevin.tho...@utoronto.ca  Tel: 416.864.5776  Fax: 416.864.3016



[R] data for pass though OAS viewport question

2015-05-25 Thread Glenn Schultz

Attached is dput of the pass through OAS 

Re: [R] data for pass though OAS viewport question

2015-05-25 Thread Bert Gunter
No it ain't. Most attachments don't make it through. dput() it
directly into the email body -- that's the point: it's plain text.

-- Bert

Bert Gunter
Genentech Nonclinical Biostatistics
(650) 467-7374

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
Clifford Stoll




On Mon, May 25, 2015 at 6:04 AM, Glenn Schultz  wrote:
> Attached is dput of the pass through OAS



Re: [R] Issues with loading csv file

2015-05-25 Thread J Robertson-Burns

I suggest two commands to diagnose
the problem:

getwd()  # show the working directory of R

This is navigation tool #17.
http://www.burns-stat.com/r-navigation-tools/

list.files()  # show the files in the working directory

You can copy and paste file names to avoid
typing mistakes.  (Not that I've ever made any.)

Pat

On 25/05/2015 13:19, Shivi82 wrote:

HI All,

I am trying to load an CSV file into the R project. the code for the same
is:
mydata<- read.csv("Jan-May Data.csv", header=TRUE)

however with this I am getting the below error message:
/*Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
   cannot open file 'Jan-May Data.csv': No such file or directory*/

I am under the impression that R automatically pulls the data from the
working directory and we do not have to add the location where the file is
saved. Please let me know if my understanding is correct and help on the
error as well.

Please note the csv file is already saved in the WD.
Thank you, Shivi





Re: [R] Vincentizing Reaction Time data in R

2015-05-25 Thread John Kane

Thanks Gabriel, 
That new method you found looks interesting even if it is a long way from 
anything I am likely to be doing.

Re my code below.  It looks like vincentization is actually straightforward.
I used bins = 10 since it was a convenient number.  I imagine if one were to
actually turn this into a function it would not be that hard to come up with
some formula to calculate the bin size, although statisticians may be wincing
when they read that last remark.

I played a little more with the idea and it really looks pretty easy to
vincentize a data.frame.
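
As a sketch of one possible formulation (an illustration, not a validated
implementation): bin each subject's sorted RTs into equal-count quantile bins
rather than equal-width cut() intervals, check n >= bins explicitly, and return
the mean of each bin.

vincentize <- function(rt, bins = 10) {
  stopifnot(length(rt) >= bins)   # the n >= bins rule mentioned above
  rt  <- sort(rt)
  bin <- cut(seq_along(rt) / length(rt),               # empirical quantile of each RT
             breaks = seq(0, 1, length.out = bins + 1),
             include.lowest = TRUE)
  tapply(rt, bin, mean)           # one mean per quantile bin
}

set.seed(1)
s1 <- sample(seq(0.5, 1.55, 0.01), 50, replace = TRUE)  # fake RTs in seconds
vincentize(s1, bins = 10)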

John Kane
Kingston ON Canada


> -Original Message-
> From: gabriel.wein...@gmail.com
> Sent: Mon, 25 May 2015 11:55:04 +0200
> To: jrkrid...@inbox.com
> Subject: Re: [R] Vincentizing Reaction Time data in R
> 
> Hi John,
> 
> Sorry for the response delay.
> 
> I found a way to do it in a slight different way :
> http://www.nicebread.de/comparing-all-quantiles-of-two-distributions-simultaneously/
> 
> You're right with the application. I just put some comments in your post.
> 
> Thank you for your time. I will now use the quantile comparison for my
> statistic test, and perform vincentization later for my thesis result.
> If I create something useful I will share it on this topic.
> 
> Gabriel
> 
>> Do I  understand the idea behind 'vincentizing' reaction times?
>> I don't want to work through the Ratcliff, (1979)  paper unless I must.
>> 
>> Let's say we have a subject , s1, with 50 rt scores.
>> We sort the scores from high to low (or low to high , it makes no
>> difference) then we split the 50 scores into quantiles (let's say
>> deciles) and calculate the mean/decile?
>> 
>> Repeat for each subject.  We now have the 'vincentized' means.
>> 
>> That's it?
> 
> Yes, the point is to get rid of the shape blindness of, for example
> ANOVA sample mean, by using quantiles to also reduce influence of
> outliers.
>> 
>> Example, of what I understand for just for one subject (s1)
>> 
>> # install plyr package if not already installed
>> install.packages("plyr")
>> #===
>> 
>> library(plyr)
>> 
>> # create some sciency looking sample data
>> rtmatter   <- c (seq(0.50 , 1.50, 0.01), seq(0.55, 1.55,  0.01) )
>> str(rtmatter)  # verify it looks sciencey
>> 
>> # create one subject
>> s1  <-  sample(rtmatter, 50, replace = TRUE)
>> 
>> # calculate 'vincentized' means for s1
>> s1  <-  sort(s1)
>> c1  <-  cut(s1, 10, right = TRUE)
> 
> You cut the distribution in 10, the use of vincentization fix the cut to
> n ≥ bins. So a formula should be used to compute it for each set of data
> 
>> ss1  <-  data.frame(c1,  s1)
>> vince1   <-   ddply(ss1, .(c1), summarize, decile.mean = mean(s1) )
>> vince1
>> 
> That's right too.
>> 
>> John Kane
>> Kingston ON Canada
>> 
>> 
>>> -Original Message-
>>> From: gabriel.wein...@gmail.com
>>> Sent: Thu, 21 May 2015 17:50:02 +0200
>>> To: jrkrid...@inbox.com, yishinlin...@gmail.com,
>>> gunter.ber...@gene.com,
>>> djnordl...@frontier.com
>>> Subject: Re: [R] Vincentizing Reaction Time data in R
>>> 
>>> Bert : Thank you for your advice, it would be a little bit difficult to
>>> do it for my master thesis but, if I want to go further with a PhD
>>> thesis (and I do want), I would probably follow your advice and get in
>>> touch with a statistician.
>>> 
>>> Yishin : Thank you very much for the references, I will definitively
>>> read the papers you quote. I'm already a little bit aware of the
>>> misuses
>>> possible with the vincentization in particular thanks to the paper of
>>> Rouder and Speckman (2004) and it seems to fit with my design. No
>>> problem if you want to keep the code but I have to tell you that it's
>>> our first semester using R and the teacher surely didn't thought that
>>> we
>>> will run out of available code with our experiment. Like John guessed
>>> the purpose of the course was to give a first view of R to get over the
>>> temptation of SPSS, my bad if I want to avoid biased statistics like
>>> sample mean ANOVA's on RT.
>>> 
>>> Dan : Thank you for your tip, this sure will help but I'm quiet at the
>>> beginning of my R skills so I hardly trust myself to do it on my own,
>>> but I can sure give it a try.
>>> 
>>> John : I had the same assumption but my research director warned me
>>> that
>>> I might run out of time for my first presentation by doing so but
>>> fairly
>>> enough for my master thesis. But again like I said to Dan I'm quiet
>>> concerned by my actual R skill.
>>> 
>>> Anyway I have to say that I'm really glad to see how much help you can
>>> get by using the r-help mailing-list.
>>> 
>>> Regards,
>>> Gabriel
>>> 
On 21/05/2015 15:52, John Kane wrote:
 In line
 
 John Kane
 Kingston ON Canada
 
 
> -Original Message-
> From: yishinlin...@gmail.com
> Sent: Thu, 21 May 2015 10:13:54 +0800
> To: gabriel.wein...@gmail.com
> Subject: Re: [R] Vincentizing Reaction Time data in

Re: [R] Issues with loading csv file

2015-05-25 Thread Michael Dewey
You could try list.files() which will tell you which files R thinks are 
in your working directory.


On 25/05/2015 13:19, Shivi82 wrote:

HI All,

I am trying to load an CSV file into the R project. the code for the same
is:
mydata<- read.csv("Jan-May Data.csv", header=TRUE)

however with this I am getting the below error message:
/*Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
   cannot open file 'Jan-May Data.csv': No such file or directory*/

I am under the impression that R automatically pulls the data from the
working directory and we do not have to add the location where the file is
saved. Please let me know if my understanding is correct and help on the
error as well.

Please note the csv file is already saved in the WD.
Thank you, Shivi






--
Michael
http://www.dewey.myzen.co.uk/home.html



[R] Trouble with SPI package

2015-05-25 Thread Miller Andres Ruiz Sanchez
Hello,

I am writing to ask about an error that I get when I use the script below.
I'm working with monthly precipitation data for the period between 1990
and 1998.

I really appreciate your help.

___

> require(SPEI)
> require(spi)
> require(sm)

> dir()
[1] "IndexScript.R"  "PradoCorre.txt" "spi.txt"

> Prado=read.table("PradoCorre.txt", head=TRUE, dec=".")

> write.table(Prado,file="PradoCorre.txt",quote=FALSE,row.names=TRUE)

> spi(3,"PradoCorre.txt",1990,1998)
Error en data[i, ] : subíndice fuera de  los límites


> Prado
   Months X1990  X1991 X1992 X1993 X1994 X1995 X1996 X1997 X1998
1 Jan   0.0   0.00   0.0  22.3   0.0   0.0  15.4   0.0   0.0
2 Feb   0.0  11.00   0.0   0.0   0.0   0.0   0.0   2.5   0.0
3 Mar   0.0   8.70   0.0  13.1   0.3   0.4  34.3   0.0   3.8
4 Apr  52.0  32.20  96.8  70.0  61.4 251.0  21.0  31.0  18.0
5 May 130.0  34.11 249.1 348.4 211.0 141.5 144.8  36.3 314.3
6 Jun 102.0 142.20 188.5  70.6  24.1 116.9  90.6 159.4 126.9
7 Jul  86.0  98.00  80.6  89.0  39.9 228.0 234.0  26.2  27.8
8 Aug 135.0  82.78  76.3 173.5 245.9 370.9  44.2  51.8 162.4
9 Sep 132.0 103.30 216.4 214.3 120.0 177.7  84.3 132.3 403.9
10Oct 432.0 245.60 180.6 253.2 101.2 356.1 241.9  92.8 117.8
11Nov  42.6  24.03  48.3  64.3 155.1  15.0 120.0  14.2   0.0
12Dec 109.3   1.60   0.0  26.0   0.0   9.3   0.0   0.0   0.0
>







Cordialmente

*Miller Ruiz*
Ingeniero Agrónomo
Auxiliar de Investigación
Área de Geomática
Cenipalma - Zona Norte
Cel: 3164807973
Fundación - Magdalena


Re: [R] Trouble with SPI package

2015-05-25 Thread Jeff Newmiller
Why are you overwriting your input file after you read it in? This seems likely 
to end up corrupting your input data if you make a mistake.

For instance, when you read a file into a data frame that has pure numeric 
values in the header line, the default behaviour is to convert those numeric 
values to valid labels, which means that they must start with a letter (in this 
case X). 

I have never used the spi function, but its help file example for the filename 
argument suggests that the year numbers should not have letters in them. 

You should be able to use a text editor to repair your input file, and remove 
the line that overwrites it from your code.
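
A sketch of reading the data without touching the original file, assuming the
column layout shown in the post (I have not used the spi package myself):

# check.names = FALSE keeps the year headers as "1990", "1991", ... instead of
# prefixing them with "X".
Prado <- read.table("PradoCorre.txt", header = TRUE, dec = ".",
                    check.names = FALSE)
names(Prado)
# If a file really must be produced for spi(), write it under a different name
# so the original input is never clobbered:
write.table(Prado, file = "PradoCorre_for_spi.txt",
            quote = FALSE, row.names = FALSE)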
---
Jeff NewmillerThe .   .  Go Live...
DCN:Basics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
--- 
Sent from my phone. Please excuse my brevity.

On May 25, 2015 8:56:32 AM PDT, Miller Andres Ruiz Sanchez 
 wrote:
>Hello,
>
>I write to ask you about an error that I have when I use the script
>below.
>I'm working with monthly  precipitation  data for the period between
>1990
>and 1998.
>
>I really thanks your help.
>
>___
>
>> require(SPEI)
>> require(spi)
>> require(sm)
>
>> dir()
>[1] "IndexScript.R"  "PradoCorre.txt" "spi.txt"
>
>> Prado=read.table("PradoCorre.txt", head=TRUE, dec=".")
>
>> write.table(Prado,file="PradoCorre.txt",quote=FALSE,row.names=TRUE)
>
>> spi(3,"PradoCorre.txt",1990,1998)
>Error en data[i, ] : subíndice fuera de  los límites
>
>
>> Prado
>   Months X1990  X1991 X1992 X1993 X1994 X1995 X1996 X1997 X1998
>1 Jan   0.0   0.00   0.0  22.3   0.0   0.0  15.4   0.0   0.0
>2 Feb   0.0  11.00   0.0   0.0   0.0   0.0   0.0   2.5   0.0
>3 Mar   0.0   8.70   0.0  13.1   0.3   0.4  34.3   0.0   3.8
>4 Apr  52.0  32.20  96.8  70.0  61.4 251.0  21.0  31.0  18.0
>5 May 130.0  34.11 249.1 348.4 211.0 141.5 144.8  36.3 314.3
>6 Jun 102.0 142.20 188.5  70.6  24.1 116.9  90.6 159.4 126.9
>7 Jul  86.0  98.00  80.6  89.0  39.9 228.0 234.0  26.2  27.8
>8 Aug 135.0  82.78  76.3 173.5 245.9 370.9  44.2  51.8 162.4
>9 Sep 132.0 103.30 216.4 214.3 120.0 177.7  84.3 132.3 403.9
>10Oct 432.0 245.60 180.6 253.2 101.2 356.1 241.9  92.8 117.8
>11Nov  42.6  24.03  48.3  64.3 155.1  15.0 120.0  14.2   0.0
>12Dec 109.3   1.60   0.0  26.0   0.0   9.3   0.0   0.0   0.0
>>
>
>
>
>
>
>
>
>Cordialmente
>
>*Miller Ruiz*
>Ingeniero Agrónomo
>Auxiliar de Investigación
>Área de Geomática
>Cenipalma - Zona Norte
>Cel: 3164807973
>Fundación - Magdalena
>

Re: [R] How to extract the standardized residuals tests from the summary report of fGarch

2015-05-25 Thread David Winsemius

> On May 24, 2015, at 7:25 PM, w...@szu.edu.cn wrote:
> 
> I am using the Rmarkdown to produce a  html slides automatically, and I want 
> to known
> How to extract the standardized residuals tests section from the summary 
> report?
> 

Probably the easiest way is with capture.output. I looked at the returned value
from the summary method for that S4 object and it was NULL, so the function
appears to simply be acting via side effects of cat to the console. You could
look at the code with:

showMethods(class="fGARCH", f="summary", includeDefs=TRUE)
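
A sketch of the capture.output() route, reusing the fit object from the example
above (the line matching may need adjusting to the exact console output):

out   <- capture.output(summary(fit))
start <- grep("Standardised Residuals Tests:", out)   # heading of the section
blank <- grep("^\\s*$", out)                          # blank lines in the output
end   <- min(blank[blank > start]) - 1                # last line of the section
srt   <- out[start:end]
cat(srt, sep = "\n")   # or drop srt into a verbatim chunk of the R Markdown slides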

— 
David.
> 
> Here are the R-code:
> 
> 
>> library("fGarch") 
>> N = 200
>> x.vec = as.vector(garchSim(garchSpec(rseed = 1985), n = N)[,1])
>> fit=garchFit(~ garch(1,1), data = x.vec, trace = FALSE)
> 
> 
>> summary(fit)
> 
> 
> Title:
> GARCH Modelling 
> 
> 
> Call:
> garchFit(formula = ~garch(1, 1), data = x.vec, trace = FALSE) 
> 
> 
> Mean and Variance Equation:
> data ~ garch(1, 1)
> 
> [data = x.vec]
> 
> 
> Conditional Distribution:
> norm 
> 
> 
> Coefficient(s):
>mu   omega  alpha1   beta1  
> 3.5418e-05  1.0819e-06  8.8855e-02  8.1200e-01  
> 
> 
> Std. Errors:
> based on Hessian 
> 
> 
> Error Analysis:
>         Estimate  Std. Error  t value  Pr(>|t|)
> mu     3.542e-05   2.183e-04    0.162     0.871
> omega  1.082e-06   1.051e-06    1.030     0.303
> alpha1 8.885e-02   5.450e-02    1.630     0.103
> beta1  8.120e-01   1.242e-01    6.538  6.25e-11 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> 
> 
> Log Likelihood:
> 861.9494    normalized:  4.309747
> 
> 
> Description:
> Mon May 25 09:10:52 2015 by user: WENSQ 
> 
> 
> 
> 
> Standardised Residuals Tests:
>                                  Statistic  p-Value
>  Jarque-Bera Test   R    Chi^2   1.114092   0.5728988
>  Shapiro-Wilk Test  R    W       0.9932317  0.4911085
>  Ljung-Box Test     R    Q(10)   7.303961   0.6964713
>  Ljung-Box Test     R    Q(15)   8.712829   0.8920477
>  Ljung-Box Test     R    Q(20)   9.766984   0.972203
>  Ljung-Box Test     R^2  Q(10)   11.88455   0.2928573
>  Ljung-Box Test     R^2  Q(15)   14.93927   0.4558006
>  Ljung-Box Test     R^2  Q(20)   20.08937   0.4523516
>  LM Arch Test       R    TR^2    11.57234   0.480607
> 
> 
> Information Criterion Statistics:
>  AIC   BIC   SIC  HQIC 
> -8.579494 -8.513527 -8.580273 -8.552798 
> 
> 
> 
> 
> Dr.  WEN SONG-QIAO
> SHENZHEN UNIVERSITY
> SHENZHEN,CHINA
> Email: w...@szu.edu.cn

David Winsemius, MD
Alameda, CA, USA


Re: [R] Issues with loading csv file

2015-05-25 Thread WRAY NICHOLAS
Something you could try is to put a small CSV file into a known location, set
the working directory to that location, and see whether R finds it, e.g.
setwd("C:/Users/Shivi/Documents/")
Put a CSV file in that folder and see whether R will read it.

Nick

On 25 May 2015 at 13:19, Shivi82  wrote:

> HI All,
>
> I am trying to load an CSV file into the R project. the code for the same
> is:
> mydata<- read.csv("Jan-May Data.csv", header=TRUE)
>
> however with this I am getting the below error message:
> /*Error in file(file, "rt") : cannot open the connection
> In addition: Warning message:
> In file(file, "rt") :
>   cannot open file 'Jan-May Data.csv': No such file or directory*/
>
> I am under the impression that R automatically pulls the data from the
> working directory and we do not have to add the location where the file is
> saved. Please let me know if my understanding is correct and help on the
> error as well.
>
> Please note the csv file is already saved in the WD.
> Thank you, Shivi
>
>
>
>



[R] run a calculation function over time fields, ordered and grouped by variables

2015-05-25 Thread gavinr
I’ve got some transit data relating to bus stops for a GIS data set.  Each
row represents one stop on a route.  For each record I have the start time
of the route, a sequence in which a bus stops, the time the bus arrives at
the first stop and the time taken to get to each of the stops from the last
one in the sequence.  Not all sequences of stops start with the number 1;
some may start with a higher number.
I need to make a new variable which has the time the bus arrives at each
stop by using the start time from the stop with the lowest sequence number,
to populate all of the arrival times for each stop in each route. 

I have a very simple example below with just three routes and a few stops in
each.  My actual data set has a few million rows.  I've also created a
version of the data set I'm aiming to get.

There are two problems here.  Firstly getting the data into the correct
format to do the calculations with 
durations, and secondly running a function over the data set to obtain the
times.
It is the durations that are critical not the date, so using the POSIX
methods doesn’t really seem appropriate here.  Ultimately the times are
going to be used in a route solver in an ArcSDE geodatabase.  I tried to use
strptime to format my times, but could not get them into a data.frame as
presumably they are a list.  In this example I’ve left them as strings. 
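
(A small illustrative aside: strptime() returns POSIXlt, which is list-based and
does not drop into a data.frame, whereas durations can be held as difftime or as
plain numeric minutes.)

iv <- c("00:00", "00:12", "00:03")
class(strptime(iv, format = "%H:%M"))      # "POSIXlt" "POSIXt" -- a list
d  <- as.difftime(iv, format = "%H:%M")    # HH:MM strings treated as durations
data.frame(iv = iv, mins = as.numeric(d, units = "mins"))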

Any help is much appreciated.

#create four columns with route id, stop sequence interval time and route
start time
ssq<-c(3,4,5,6,7,8,9,1,2,3,4,2,3,4,5,6,7,8)
tint<-c("00:00","00:12","00:03","00:06","00:09","00:02","00:04","00:00","00:08","00:10","00:10","00:00","00:02","00:04","00:08","00:02","00:01","00:01")
tst<-c(rep("18:20",7),rep("10:50",4),rep("16:15",7))
rtid<-c(rep("a",7),rep("b",4),rep("c",7))
df<-data.frame(cbind(ssq,tint,tst,rtid))
df  

#correct data set should look like this
tarr<-c("18:20","18:32","18:35","18:41","18:50","18:52","18:56","10:50","10:58","11:08","11:18","16:15","16:17","16:21","16:29","16:31","16:32","16:33")
df2<-cbind(df,tarr)
df2






[R] How do I move the horizontal axis in a plot so that it starts at the zero of the vertical axis?

2015-05-25 Thread andrejfavia
Greetings.

[1] How do I move the horizontal axis in a plot so that it starts at the
zero of the vertical axis? I tried using ylim=c(0, 2) but it doesn't
work. I'd also like to keep the "0.0" along the vertical axis and not
have it vanish.

[2] Also, how do I change the data points to five-pointed stars?

[3] Also, how do I know where threads posted to this email address
appear on the Nabble forum, so that I can post to it and have my posts
approved?


Example:

x <- c(-2.5, -1.3, 0.6, 0.8, 2.1)
y <- c(0.3, 1.9, 1.4, 0.7, 1.1)

plot(x, y, ylim=c(0, 2))



Re: [R] How do I move the horizontal axis in a plot so that it starts at the zero of the vertical axis?

2015-05-25 Thread Jim Lemon
Hi andrejfavia,
You probably want:

plot(x,y,ylim=c(0,2),yaxs="i")

for question [1]. Have a look at the ms.polygram and my.symbols help
pages in the TeachingDemos package for question [2], and I haven't got
a clue about question [3].
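
A base-R sketch of both points (not using TeachingDemos; the draw_star() helper
below is only an illustration):

x <- c(-2.5, -1.3, 0.6, 0.8, 2.1)
y <- c(0.3, 1.9, 1.4, 0.7, 1.1)

draw_star <- function(x, y, r = 0.08) {
  a   <- pi / 2 + (0:4) * 2 * pi / 5   # the five outer vertices of a pentagon
  idx <- c(1, 3, 5, 2, 4)              # connect every second vertex -> pentagram
  polygon(x + r * cos(a[idx]), y + r * sin(a[idx]), border = "blue")
}

plot(x, y, ylim = c(0, 2), yaxs = "i", type = "n")  # yaxs="i": y axis starts at 0
invisible(mapply(draw_star, x, y))                  # a five-pointed star per point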

Jim


On Tue, May 26, 2015 at 6:39 AM,   wrote:
> Greetings.
>
> [1] How do I move the horizontal axis in a plot so that it starts at the
> zero of the vertical axis? I tried using ylim=c(0, 2) but it doesn't
> work. I'd also like to keep the "0.0" along the vertical axis and not
> have it vanish.
>
> [2] Also, how do I change the data points to five-pointed stars?
>
> [3] Also, how do I know where threads posted to this email address
> appear on the Nabble forum, so that I can post to it and have my posts
> approved?
>
>
> Example:
>
> x <- c(-2.5, -1.3, 0.6, 0.8, 2.1)
> y <- c(0.3, 1.9, 1.4, 0.7, 1.1)
>
> plot(x, y, ylim=c(0, 2))
>



Re: [R] run a calculation function over time fields, ordered and grouped by variables

2015-05-25 Thread Jim Lemon
Hi gavinr,
Perhaps this will do what you want.

add_HH_MM<-function(x) {
 t1bits<-strsplit(as.character(x$tst),":")
 t2bits<-strsplit(as.character(x$tint),":")
 
hours<-as.numeric(lapply(t1bits,"[",1))+cumsum(as.numeric(lapply(t2bits,"[",1)))
 
minutes<-as.numeric(lapply(t1bits,"[",2))+cumsum(as.numeric(lapply(t2bits,"[",2)))
 next_hour<-minutes > 59
 # adjust for running into the next hour
 minutes[next_hour]<-minutes[next_hour]-60
 hours[next_hour]<-hours[next_hour]+1
 # adjust for running into the next day
 hours[hours > 23]<-hours[hours > 23]-24
 
return(paste(formatC(hours,width=2,flag=0),formatC(minutes,width=2,flag=0),sep=":"))
}

df$tarr<-unlist(by(df,df$rtid,add_HH_MM))

Jim


On Tue, May 26, 2015 at 5:28 AM, gavinr  wrote:
> I’ve got some transit data relating to bus stops for a GIS data set.  Each
> row represents one stop on a route.  For each record I have the start time
> of the route, a sequence in which a bus stops, the time the bus arrives at
> the first stop and the time taken to get to each of the stops from the last
> one in the sequence.  Not all sequences of stops starts with the number 1,
> some may start with a higher number.
> I need to make a new variable which has the time the bus arrives at each
> stop by using the start time from the stop with the lowest sequence number,
> to populate all of the arrival times for each stop in each route.
>
> I have a very simple example below with just three routes and a few stops in
> each.  My actual data set has a few million rows.  I've also created a
> version of the data set I'm aiming to get.
>
> There are two problems here.  Firstly getting the data into the correct
> format to do the calculations with
> durations, and secondly running a function over the data set to obtain the
> times.
> It is the durations that are critical not the date, so using the POSIX
> methods doesn’t really seem appropriate here.  Ultimately the times are
> going to be used in a route solver in an ArcSDE geodatabase.  I tried to use
> strptime to format my times, but could not get them into a data.frame as
> presumably they are a list.  In this example I’ve left them as strings.
>
> Any help is much appreciated.
>
> #create four columns with route id, stop sequence interval time and route
> start time
> ssq<-c(3,4,5,6,7,8,9,1,2,3,4,2,3,4,5,6,7,8)
> tint<-c("00:00","00:12","00:03","00:06","00:09","00:02","00:04","00:00","00:08","00:10","00:10","00:00","00:02","00:04","00:08","00:02","00:01","00:01")
> tst<-c(rep("18:20",7),rep("10:50",4),rep("16:15",7))
> rtid<-c(rep("a",7),rep("b",4),rep("c",7))
> df<-data.frame(cbind(ssq,tint,tst,rtid))
> df
>
> #correct data set should look like this
> tarr<-c("18:20","18:32","18:35","18:41","18:50","18:52","18:56","10:50","10:58","11:08","11:18","16:15","16:17","16:21","16:29","16:31","16:32","16:33")
> df2<-cbind(df,tarr)
> df2
>
>
>
>
>


[R] Stats course Phillip Island Nature Parks, Australia

2015-05-25 Thread Highland Statistics Ltd

Apologies for cross-posting


We would like to announce the following statistics course:

Course: Data exploration, regression, GLM & GAM with introduction to R
When: 14 - 18 September 2015
Where: Phillip Island Nature Parks, Australia
Course flyer: 
http://www.highstat.com/Courses/Flyers/Flyer2015_09PhillipIsland_regression_GLM_GAM.pdf

URL: http://www.highstat.com/statscourse.htm


Kind regards,

Alain Zuur



--
Dr. Alain F. Zuur

First author of:
1. Beginner's Guide to GAMM with R (2014).
2. Beginner's Guide to GLM and GLMM with R (2013).
3. Beginner's Guide to GAM with R (2012).
4. Zero Inflated Models and GLMM with R (2012).
5. A Beginner's Guide to R (2009).
6. Mixed effects models and extensions in ecology with R (2009).
7. Analysing Ecological Data (2007).

Highland Statistics Ltd.
9 St Clair Wynd
UK - AB41 6DZ Newburgh
Tel:   0044 1358 788177
Email: highs...@highstat.com
URL: www.highstat.com



Re: [R] run a calculation function over time fields, ordered and grouped by variables

2015-05-25 Thread jdnewmil

Another way:

#create four columns with route id, stop sequence interval time and 
route start time

ssq <- c( 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 8 )
tint <- c( "00:00", "00:12", "00:03", "00:06", "00:09", "00:02", "00:04"
 , "00:00", "00:08", "00:10", "00:10"
 , "00:00", "00:02", "00:04", "00:08", "00:02", "00:01", "00:01" 
)

tst <- c( rep( "18:20", 7 )
, rep( "10:50", 4 )
, rep( "16:15", 7 ) )
rtid <- c( rep( "a", 7 )
 , rep( "b", 4 )
 , rep( "c", 7 ) )
# Don't use cbind to make data frames... it usually ends up
# forcing all columns to be character or factors
# Also, avoid using "df" as a variable name... it is the name of
# a function in base R, so that gets confusing fast
DF <- data.frame( ssq, tint, tst, rtid, stringsAsFactors=FALSE )
DF

#correct data set should look like this
tarr <- c( "18:20", "18:32", "18:35", "18:41", "18:50", "18:52", "18:56"
 , "10:50", "10:58", "11:08", "11:18"
 , "16:15", "16:17", "16:21", "16:29", "16:31", "16:32", "16:33" 
)

DF2  <- data.frame( DF, tarr, stringsAsFactors=FALSE )
DF2

library(dplyr)
DFs <- (   DF
       %>% group_by( rtid )
       %>% mutate( tarr = as.character( as.POSIXct( tst, format="%H:%M", tz="GMT" )
                                        + as.difftime( cumsum( as.numeric( as.difftime( tint, format="%H:%M" )
                                                                         , units="mins" ) )
                                                     , units="mins" )
                                      , format="%H:%M" ) )
       %>% as.data.frame # removes grouping behavior from result
       )
identical( DFs, DF2 )

On 2015-05-25 15:43, Jim Lemon wrote:

Hi gavinr,
Perhaps this will do what you want.

add_HH_MM<-function(x) {
 t1bits<-strsplit(as.character(x$tst),":")
 t2bits<-strsplit(as.character(x$tint),":")

hours<-as.numeric(lapply(t1bits,"[",1))+cumsum(as.numeric(lapply(t2bits,"[",1)))

minutes<-as.numeric(lapply(t1bits,"[",2))+cumsum(as.numeric(lapply(t2bits,"[",2)))
 next_hour<-minutes > 59
 # adjust for running into the next hour
 minutes[next_hour]<-minutes[next_hour]-60
 hours[next_hour]<-hours[next_hour]+1
 # adjust for running into the next day
 hours[hours > 23]<-hours[hours > 23]-24

return(paste(formatC(hours,width=2,flag=0),formatC(minutes,width=2,flag=0),sep=":"))
}

df$tarr<-unlist(by(df,df$rtid,add_HH_MM))

Jim


On Tue, May 26, 2015 at 5:28 AM, gavinr  wrote:
I’ve got some transit data relating to bus stops for a GIS data set.  
Each
row represents one stop on a route.  For each record I have the start 
time
of the route, a sequence in which a bus stops, the time the bus 
arrives at
the first stop and the time taken to get to each of the stops from the 
last
one in the sequence.  Not all sequences of stops starts with the 
number 1,

some may start with a higher number.
I need to make a new variable which has the time the bus arrives at 
each
stop by using the start time from the stop with the lowest sequence 
number,

to populate all of the arrival times for each stop in each route.

I have a very simple example below with just three routes and a few 
stops in

each.  My actual data set has a few million rows.  I've also created a
version of the data set I'm aiming to get.

There are two problems here.  Firstly getting the data into the 
correct

format to do the calculations with
durations, and secondly running a function over the data set to obtain 
the

times.
It is the durations that are critical not the date, so using the POSIX
methods doesn’t really seem appropriate here.  Ultimately the times 
are
going to be used in a route solver in an ArcSDE geodatabase.  I tried 
to use
strptime to format my times, but could not get them into a data.frame 
as
presumably they are a list.  In this example I’ve left them as 
strings.


Any help is much appreciated.

#create four columns with route id, stop sequence interval time and 
route

start time
ssq<-c(3,4,5,6,7,8,9,1,2,3,4,2,3,4,5,6,7,8)
tint<-c("00:00","00:12","00:03","00:06","00:09","00:02","00:04","00:00","00:08","00:10","00:10","00:00","00:02","00:04","00:08","00:02","00:01","00:01")
tst<-c(rep("18:20",7),rep("10:50",4),rep("16:15",7))
rtid<-c(rep("a",7),rep("b",4),rep("c",7))
df<-data.frame(cbind(ssq,tint,tst,rtid))
df

#correct data set should look like this
tarr<-c("18:20","18:32","18:35","18:41","18:50","18:52","18:56","10:50","10:58","11:08","11:18","16:15","16:17","16:21","16:29","16:31","16:32","16:33")
df2<-cbind(df,tarr)
df2





--
View this message in context: 
http://r.789695.n4.nabble.com/run-a-calculation-function-over-time-fields-ordered-and-grouped-by-variables-tp4707655.html

Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list -- To

Re: [R] How do I move the horizontal axis in a plot so that it starts at the zero of the vertical axis?

2015-05-25 Thread Jeff Newmiller
Regarding [3], you are better off not using Nabble for this EMAIL list at 
all... at best it does not encourage following the Posting Guide, and at worst 
it discourages people on the mailing list from answering your questions.
---
Jeff NewmillerThe .   .  Go Live...
DCN:Basics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
--- 
Sent from my phone. Please excuse my brevity.

On May 25, 2015 1:39:32 PM PDT, andrejfa...@ml1.net wrote:
>Greetings.
>
>[1] How do I move the horizontal axis in a plot so that it starts at
>the
>zero of the vertical axis? I tried using ylim=c(0, 2) but it doesn't
>work. I'd also like to keep the "0.0" along the vertical axis and not
>have it vanish.
>
>[2] Also, how do I change the data points to five-pointed stars?
>
>[3] Also, how do I know where threads posted to this email address
>appear on the Nabble forum, so that I can post to it and have my posts
>approved?
>
>
>Example:
>
>x <- c(-2.5, -1.3, 0.6, 0.8, 2.1)
>y <- c(0.3, 1.9, 1.4, 0.7, 1.1)
>
>plot(x, y, ylim=c(0, 2))
>


[R] R CMD methods and ggplot2 advice

2015-05-25 Thread Glenn Schultz

Hello All,

I have two packages, Bond Lab and the Companion to Investing in MBS.  Bond Lab
clears the check, and I am working on the on.load() to copy a needed directory,
per Duncan Murdoch's advice, to make Bond Lab CRAN-able.  The companion passes
with two notes.  The output is below:

I get two notes (shown below).

I am not sure about the first one, as I tried to declare methods but then I get
another note that methods is declared but not used.  The second note is related
to ggplot2: the names are variables passed to ggplot2 within a function, and I
am not sure why they appear in the R CMD check.  If someone could point me in
the right direction to resolve these I would appreciate the advice.  These are
my last two remaining issues to make the packages pass CRAN, aside from the
on.load().

Best Regards,
Glenn

* checking dependencies in R code ... NOTE
package ‘methods’ is used but not declared


* checking R code for possible problems ... NOTE
CreditEnhancement: no visible binding for global variable ‘Period’
CreditEnhancement: no visible binding for global variable ‘Value’
CreditEnhancement: no visible binding for global variable ‘Variable’
CreditEnhancement: no visible binding for global variable ‘..density..’
PassThrough.OAS: no visible binding for global variable ‘value’
PassThrough.OAS: no visible binding for global variable ‘..density..’
PassThroughCashFlow: no visible binding for global variable ‘Period’
PassThroughCashFlow: no visible binding for global variable ‘value’
PassThroughCashFlow: no visible binding for global variable ‘variable’
PlotMtgKeyRates: no visible binding for global variable ‘Tenor’
PlotMtgKeyRates: no visible binding for global variable ‘Duration’
PlotTermStructure: no visible binding for global variable ‘value’
PlotTermStructure: no visible binding for global variable ‘variable’
REMICOAS: no visible binding for global variable ‘value’
REMICOAS: no visible binding for global variable ‘..density..’
TotalReturn: no visible binding for global variable ‘Scenario’
TotalReturn: no visible binding for global variable ‘value’
TotalReturn: no visible binding for global variable ‘variable’
TwistScenario: no visible binding for global variable ‘Scenario’
TwistScenario: no visible binding for global variable ‘Return’
ValuationFramework: no visible binding for global variable ‘value’
ValuationFramework: no visible binding for global variable ‘variable’
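
A sketch of the usual way to silence these two notes (an assumption about the
cause, not necessarily how they were resolved here): declare methods in the
DESCRIPTION Imports field and import it in NAMESPACE, and register the
non-standard-evaluation column names used in the ggplot2 calls:

# DESCRIPTION:  Imports: methods, ggplot2
# NAMESPACE:    import(methods)   # or the roxygen tag @import methods

# In one of the package's R source files, declare the bare column names that
# ggplot2 evaluates non-standardly, so R CMD check stops flagging them:
if (getRversion() >= "2.15.1") {
  utils::globalVariables(c("Period", "Value", "Variable", "value", "variable",
                           "..density..", "Tenor", "Duration", "Scenario",
                           "Return"))
}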

==> devtools::check()

Updating Companion2IMBS documentation
Loading Companion2IMBS
'/Library/Frameworks/R.framework/Resources/bin/R' --vanilla CMD build  \
  '/Users/glennschultz/Companion to Investing in MBS' --no-resave-data  \
  --no-manual 

* checking for file ‘/Users/glennschultz/Companion to Investing in 
MBS/DESCRIPTION’ ... OK
* preparing ‘Companion2IMBS’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
Removed empty directory ‘Companion2IMBS/inst’
Removed empty directory ‘Companion2IMBS/test/testthat’
Removed empty directory ‘Companion2IMBS/test’
* building ‘Companion2IMBS_1.0.tar.gz’

'/Library/Frameworks/R.framework/Resources/bin/R' --vanilla CMD check  \
  
'/var/folders/tv/sq6gmnvs13j8jrhkt87f_zmcgn/T//RtmpOabREs/Companion2IMBS_1.0.tar.gz'
  \
  --timings 

* using log directory ‘/Users/glennschultz/Companion2IMBS.Rcheck’
* using R version 3.0.3 (2014-03-06)
* using platform: x86_64-apple-darwin10.8.0 (64-bit)
* using session charset: UTF-8
* checking for file ‘Companion2IMBS/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘Companion2IMBS’ version ‘1.0’
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘Companion2IMBS’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking dependencies in R code ... NOTE
package ‘methods’ is used but not declared
See the information on DESCRIPTION files in the chapter ‘Creating R
packages’ of the ‘Writing R Extensions’ manual.
* checking S3 generic/method consistency ... OK
* checking 

[R] Path analysis

2015-05-25 Thread Alberto Canarini
Hi there,

As I'm approaching path analysis, I was wondering which packages may suit a
path analysis of my data. My data are on interactions of soil biotic and
abiotic factors, such as microbial biomass carbon, soil carbon, water content,
temperature, etc.

Thanks in advance,

Best regards.

Alberto

Alberto Canarini
PhD Student l Faculty of Agriculture and Environment
THE UNIVERSITY OF SYDNEY
Shared room l CCWF l Camden Campus l NSW 2570
P 02 935 11892

