Re: [R] Question on Simultaneous Equations & Forecasting

2017-07-13 Thread Pfaff, Bernhard Dr.
Hi Frances,

I have not touched the systemfit package for quite some time, but to solve 
your problem the following two pointers might be helpful:

1) Recast your model in a revised form, i.e., substitute your identity directly 
into your reaction functions, if possible.
2) For solving your model, you can employ the Gauß-Seidel method (see 
https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method).
This not only lets you generate forecasts conditional on your exogenous 
variables, but also lets you compute 'dynamic ex post' forecasts. This is 
probably the most powerful test for dynamic simultaneous equation systems: you 
provide only your predetermined variables as starting values and then apply the 
Gauss-Seidel method (recursively) in-sample. The paths of your endogenous 
variables should then not depart too much from the observed in-sample values, 
i.e., you are assessing the stability of your model. Because forecast errors 
cumulate over time in a dynamic ex-post forecast, this is a rather good and 
stringent model test.

Incidentally, when you use simultaneous equation models on a larger scale (say, 
200-300 equations, like medium-sized macroeconomic models), the only practical 
route is to estimate your reaction equations separately and then put all the 
pieces - including identities and/or technical equations - together in a format 
suitable for applying the Gauss-Seidel method. Hence, forget about 2SLS or 3SLS 
and the Haavelmo bias.
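
For what it is worth, here is a minimal sketch of the idea in R. The equations, 
variable names and coefficients are made up purely for illustration; they are 
not from your model:

## Toy system (illustrative coefficients only):
##   C_t = a0 + a1 * Y_t         (reaction function)
##   I_t = b0 + b1 * Y_{t-1}     (reaction function)
##   Y_t = C_t + I_t + G_t       (identity)
gauss.seidel.step <- function(Ylag, G, a0, a1, b0, b1, tol = 1e-8, maxit = 100) {
  Y <- Ylag                          # starting value for the current period
  for (i in seq_len(maxit)) {
    C <- a0 + a1 * Y                 # update each endogenous variable in turn
    I <- b0 + b1 * Ylag
    Ynew <- C + I + G                # the identity closes the system
    if (abs(Ynew - Y) < tol) break
    Y <- Ynew
  }
  c(C = C, I = I, Y = Ynew)
}

## Dynamic ex-post solution: the solved Y is fed back as next period's lag.
simulate.path <- function(Y0, Gpath, ...) {
  out <- matrix(NA_real_, nrow = length(Gpath), ncol = 3,
                dimnames = list(NULL, c("C", "I", "Y")))
  Ylag <- Y0
  for (t in seq_along(Gpath)) {
    out[t, ] <- gauss.seidel.step(Ylag, Gpath[t], ...)
    Ylag <- out[t, "Y"]
  }
  out
}

## e.g. simulate.path(Y0 = 100, Gpath = rep(20, 10), a0 = 10, a1 = 0.6, b0 = 5, b1 = 0.1)

Comparing such an in-sample path with the observed endogenous variables is 
exactly the dynamic ex-post check described above.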

Best wishes,
Bernhard   

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of OseiBonsu, 
Frances
Sent: Wednesday, 12 July 2017 22:36
To: r-help@r-project.org
Subject: [EXT] [R] Question on Simultaneous Equations & Forecasting

Hello,

I have estimated a simultaneous equation model (similar to Klein's model) in R 
using the systemfit package.

I have an identity equation, along with three other equations. Do you know how 
to explicitly identify the identity equation in R?

I am also trying to forecast the dependent variables in the simultaneous 
equation model, while incorporating the identity equation in the forecasts. Is 
there a way to do this in R?

The only way that I have been able to forecast the dependent variables has been 
by getting the predictions of each variable, converting them to univariate time 
series, and forecasting each variable individually.

Any help would be appreciated.

Best Regards,

Frances Osei-Bonsu
Summer Analyst, Research and Strategy
LaSalle Investment Management
333 West Wacker Drive, Suite 2300, Chicago IL 60606 Email 
frances.oseibo...@lasalle.com
Tel +1 312 897 4024
lasalle.com




[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Question on Simultaneous Equations & Forecasting

2017-07-13 Thread Pfaff, Bernhard Dr.
Who was speaking about non-linear models in the first place???
The Klein-Model(s) and pretty much all simultaneous equation models encountered 
in macro-econometrics are linear and/or can contain linear approximations to 
non-linear relationships, e.g., production functions of the Cobb-Douglas type. 

Best,
Bernhard

-----Original Message-----
From: Berend Hasselman [mailto:b...@xs4all.nl] 
Sent: Thursday, 13 July 2017 10:53
To: OseiBonsu, Frances
Cc: Pfaff, Bernhard Dr.; r-help@r-project.org
Subject: [EXT] Re: [R] Question on Simultaneous Equations & Forecasting

Frances,

I would not advise Gauss-Seidel for non-linear models. It can be quite tricky 
and slow, and it may diverge.

You can write your model as a non-linear system of equations and use one of the 
nonlinear solvers.
See the section "Root Finding" in the task view NumericalMathematics, which 
suggests three packages (BB, nleqslv and ktsolve). These packages are certainly 
able to handle medium-sized models.
(https://cran.r-project.org/web/views/NumericalMathematics.html)

Write a function with the system of equations with each equation written as 

 y[..] <- lefthandside - (righthandside)

You can then include identities naturally.

You would have to make the model dynamic but that shouldn't be too difficult 
using vector indexing.
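
A minimal sketch of what I mean (the three equations and their coefficients are 
invented for illustration; only the nleqslv call pattern matters):

library(nleqslv)

## one period of a toy system, identity included, each equation as lhs - rhs
model.fn <- function(x, Ylag, G) {
  C <- x[1]; I <- x[2]; Y <- x[3]
  y <- numeric(3)
  y[1] <- C - (10 + 0.6 * Y)       # consumption reaction function
  y[2] <- I - (5 + 0.1 * Ylag)     # investment reaction function
  y[3] <- Y - (C + I + G)          # identity enters naturally
  y
}

sol <- nleqslv(x = c(50, 10, 100), fn = model.fn, Ylag = 100, G = 20)
sol$x    # solved values of C, I and Y for this period

For a dynamic solution you would loop over periods, feeding the solved values 
back in as lags (the vector indexing mentioned above).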

Berend Hasselman

> On 13 Jul 2017, at 10:06, Pfaff, Bernhard Dr. 
>  wrote:
> 
> Hi Frances,
> 
> I have not touched the systemfit package for quite some time, but to solve 
> your problem the following two pointers might be helpful:
> 
> 1) Recast your model in a revised form, i.e., substitute your identity 
> directly into your reaction functions, if possible.
> 2) For solving your model, you can employ the Gauß-Seidel method (see 
> https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method).
> This not only lets you generate forecasts conditional on your exogenous 
> variables, but also lets you compute 'dynamic ex post' forecasts. This is 
> probably the most powerful test for dynamic simultaneous equation systems: 
> you provide only your predetermined variables as starting values and then 
> apply the Gauss-Seidel method (recursively) in-sample. The paths of your 
> endogenous variables should then not depart too much from the observed 
> in-sample values, i.e., you are assessing the stability of your model. 
> Because forecast errors cumulate over time in a dynamic ex-post forecast, 
> this is a rather good and stringent model test.
> 
> Incidentally, when you use simultaneous equation models on a larger scale 
> (say, 200-300 equations, like medium-sized macroeconomic models), the only 
> practical route is to estimate your reaction equations separately and then 
> put all the pieces - including identities and/or technical equations - 
> together in a format suitable for applying the Gauss-Seidel method. Hence, 
> forget about 2SLS or 3SLS and the Haavelmo bias.
> 
> Best wishes,
> Bernhard   
> 
> -----Original Message-----
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of 
> OseiBonsu, Frances
> Sent: Wednesday, 12 July 2017 22:36
> To: r-help@r-project.org
> Subject: [EXT] [R] Question on Simultaneous Equations & Forecasting
> 
> Hello,
> 
> I have estimated a simultaneous equation model (similar to Klein's model) in 
> R using the systemfit package.
> 
> I have an identity equation, along with three other equations. Do you know 
> how to explicitly identify the identity equation in R?
> 
> I am also trying to forecast the dependent variables in the simultaneous 
> equation model, while incorporating the identity equation in the forecasts. 
> Is there a way to do this in R?
> 
> The only way that I have been able to forecast the dependent variables has 
> been by getting the predictions of each variable, converting them to 
> univariate time series, and forecasting each variable individually.
> 
> Any help would be appreciated.
> 
> Best Regards,
> 
> Frances Osei-Bonsu
> Summer Analyst, Research and Strategy
> LaSalle Investment Management
> 333 West Wacker Drive, Suite 2300, Chicago IL 60606 Email 
> frances.oseibo...@lasalle.com<mailto:frances.oseibo...@lasalle.com>
> Tel +1 312 897 4024
> lasalle.com<http://www.lasalle.com/>
> 
> 
> 

Re: [R] ca.jo function, urca package, singular matrix problem

2015-07-09 Thread Pfaff, Bernhard Dr.
Watch out for the pre-sample values (K = 2): the first K observations are lost 
as pre-sample values, hence the dumvar you have supplied consists of zeros only 
in your first example.
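
You can see the point directly with a small sketch (assuming dumvar is the 
one-column matrix from your call):

K <- 2
dum.used <- dumvar[-(1:K), , drop = FALSE]  # what remains once the pre-sample is dropped
all(dum.used == 0)                          # TRUE in your first example -> a zero column

A zero column in the regressor matrix is what makes it exactly singular; once 
the dummy is also 1 in a later observation (as in your second attempt), the 
column is no longer all zeros.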

Best,
Bernhard

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of mrrox
Sent: Thursday, 9 July 2015 15:51
To: r-help@r-project.org
Subject: [R] ca.jo function, urca package, singular matrix problem

Hi, I am trying to run a cointegration test with a dummy variable using the 
`ca.jo` function in the `urca` package. 

johcoint = ca.jo(Ydata[10:60, 1:5], type="trace", ecdet=c("const"), K=2,
                 spec="transitory", dumvar=dumvar)

`dumvar` is the binary variable that takes the values 1 and 0 only. The first 
two observations are 1 and the rest are 0s. 
when I run the code, I get 

    Error in solve.default(M11) : 
      Lapack routine dgesv: system is exactly singular: U[1,1] = 0

I think this has something to do with the invertibility of the input matrix, 
and it occurs only when I use `dumvar`. The error message disappears if I add a 
1 to the 3rd observation of `dumvar`.

Below is the sample data just for info:
   

          A        B        C        D        E  dumvar
1  2.255446 1.688807 1.506579 1.880152 9.575868  1
2  2.230118 1.578281 1.546805 1.905426 9.545534  1
3  2.255446 1.688807 1.506579 1.880152 9.575868  0
4  2.230118 1.578281 1.546805 1.905426 9.545534  0
5  2.255446 1.688807 1.506579 1.880152 9.575868  0
6  2.230118 1.578281 1.546805 1.905426 9.545534  0
7  2.255446 1.688807 1.506579 1.880152 9.575868  0
8  2.230118 1.578281 1.546805 1.905426 9.545534  0
9  2.255446 1.688807 1.506579 1.880152 9.575868  0
10 2.230118 1.578281 1.546805 1.905426 9.545534  0
11 2.255446 1.688807 1.506579 1.880152 9.575868  0
12 2.230118 1.578281 1.546805 1.905426 9.545534  0
13 2.255446 1.688807 1.506579 1.880152 9.575868  0
14 2.230118 1.578281 1.546805 1.905426 9.545534  0
15 2.255446 1.688807 1.506579 1.880152 9.575868  0
16 2.230118 1.578281 1.546805 1.905426 9.545534  0
17 2.255446 1.688807 1.506579 1.880152 9.575868  0
18 2.230118 1.578281 1.546805 1.905426 9.545534  0
19 2.255446 1.688807 1.506579 1.880152 9.575868  0
20 2.230118 1.578281 1.546805 1.905426 9.545534  0
21 2.255446 1.688807 1.506579 1.880152 9.575868  0
22 2.230118 1.578281 1.546805 1.905426 9.545534  0
23 2.255446 1.688807 1.506579 1.880152 9.575868  0
24 2.230118 1.578281 1.546805 1.905426 9.545534  0
25 2.255446 1.688807 1.506579 1.880152 9.575868  0
26 2.230118 1.578281 1.546805 1.905426 9.545534  0
27 2.255446 1.688807 1.506579 1.880152 9.575868  0
28 2.230118 1.578281 1.546805 1.905426 9.545534  0
29 2.255446 1.688807 1.506579 1.880152 9.575868  0
30 2.230118 1.578281 1.546805 1.905426 9.545534  0
31 2.255446 1.688807 1.506579 1.880152 9.575868  0
32 2.230118 1.578281 1.546805 1.905426 9.545534  0
33 2.255446 1.688807 1.506579 1.880152 9.575868  0
34 2.230118 1.578281 1.546805 1.905426 9.545534  0
35 2.255446 1.688807 1.506579 1.880152 9.575868  0
36 2.230118 1.578281 1.546805 1.905426 9.545534  0
37 2.255446 1.688807 1.506579 1.880152 9.575868  0
38 2.230118 1.578281 1.546805 1.905426 9.545534  0
39 2.255446 1.688807 1.506579 1.880152 9.575868  0
40 2.230118 1.578281 1.546805 1.905426 9.545534  0
41 2.255446 1.688807 1.506579 1.880152 9.575868  0
42 2.230118 1.578281 1.546805 1.905426 9.545534  0
43 2.255446 1.688807 1.506579 1.880152 9.575868  0
44 2.230118 1.578281 1.546805 1.905426 9.545534  0
45 2.255446 1.688807 1.506579 1.880152 9.575868  0
46 2.230118 1.578281 1.546805 1.905426 9.545534  0
47 2.255446 1.688807 1.506579 1.880152 9.575868  0
48 2.230118 1.578281 1.546805 1.905426 9.545534  0
49 2.255446 1.688807 1.506579 1.880152 9.575868  0
50 2.230118 1.578281 1.546805 1.905426 9.545534  0

Thank you!



--
View this message in context: 
http://r.789695.n4.nabble.com/ca-jo-function-urca-package-singular-matrix-problem-tp4709635.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.

Re: [R] where is XMLRPC for R>3.0 for Windows machines

2014-12-19 Thread Pfaff, Bernhard Dr.
Hello Diogo,

the package is hosted on Omegahat:

http://www.omegahat.org/XMLRPC/

Best wishes,
Bernhard

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Diogo André 
Alagador
Sent: Friday, 19 December 2014 14:03
To: r-help@r-project.org
Subject: [R] where is XMLRPC for R>3.0 for Windows machines

 

I need to install rneos for R 3.1 under Windows 64bit. 

However, it depends on the package XMLRPC, which is not available in the 
conventional repositories.

In the CRAN R 3.1 online readme
(http://cran.r-project.org/bin/windows/contrib/3.1/ReadMe) there is information 
regarding the installation of the package XMLRPC for Windows that directs users 
to Prof. Ripley's link 
(http://www.stats.ox.ac.uk/pub/RWin/bin/windows/contrib/3.1/). However, the zip 
file is not there, nor is it there for the R 3.0 version.

Can anyone tell me how I can obtain it?

 

Best regards,

Diogo Alagador

 
http://www.cibioue.uevora.pt/9-uncategorised/185-dr-diogo-alagador

CIBIO/UE - Research Center in Biodiversity and Genetic Resources, University of 
Évora, Portugal

 

 

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Johansen Test of Cointegration:How to access rows in R output

2015-10-05 Thread Pfaff, Bernhard Dr.
RTFM: help("ca.jo-class")

library(urca)
example(ca.jo)
class(sjf.vecm)
slotNames(sjf.vecm)
slot(sjf.vecm, "cval")
slot(sjf.vecm, "teststat")
slot(sjf.vecm, "V")
slot(sjf.vecm, "Vorg")

Best,
Bernhard

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Preetam Pal
Sent: Sunday, 4 October 2015 18:43
To: r-help@r-project.org
Subject: [R] Johansen Test of Cointegration: How to access rows in R output

Hi guys,

I ran ca.jo(data, type="trace", ecdet="none", K=2), i.e. Johansen's trace test, 
in RStudio (package "urca") and got the output below:

I have 3 questions about this:

A> How do I programmatically access the columns ("test", "10pct", etc.) in any 
row corresponding to, say, r <= 1 in the output? I mean, I shall only provide 
the r-value as an input and all the (column name, column value) pairs will be 
outputted to me.

B> How do I write code that will check whether the null hypotheses for r = 0, 
r <= 1, r <= 2 and so on (in this order) are rejected or not; and, in case one 
of them is rejected, it checks the next higher value of r and gives the final 
inference, something like "Null not rejected for r <= *appropriate r-value from 
this table*" or "All nulls rejected" or "No null rejected".

C> Also, I need to extract the eigenvectors from the cointegration matrix below 
to get the cointegrated transforms.

I have attached the data for your perusal. If I need to provide anything more, 
please let me know.

Regards,
Preetam

######################
# Johansen-Procedure #
######################

Test type: trace statistic , with linear trend in cointegration

Eigenvalues (lambda):
[1] 7.935953e-01 5.444372e-01 4.985327e-01 2.562245e-01 5.551115e-16

Values of teststatistic and critical values of test:

  test 10pct  5pct  1pct
r <= 3 |  6.81 10.49 12.25 16.26
r <= 2 | 22.68 22.76 25.32 30.45
r <= 1 | 40.77 39.06 42.44 48.45
r = 0  | 77.06 59.14 62.99 70.05

Eigenvectors, normalised to first column:
(These are the cointegration relations)

              GDP.l2       HPA.l2       FX.l2        Y.l2      trend.l2
GDP.l2    1.000000000    1.0000000   1.00000000  1.00000000   1.0000000
HPA.l2    2.525500000    0.1569079   0.08077351 -0.22777550  -0.9178250
FX.l2    -8.643729121   -2.5815150   0.17158404 -0.47053012  -4.8528875
Y.l2      0.805229998   -1.4241546   0.07767540  0.02303305   0.5213294
trend.l2  0.006283314    0.0385276  -0.01512016  0.01986813  -0.9516072

Weights W:
(This is the loading matrix)

           GDP.l2       HPA.l2       FX.l2        Y.l2       trend.l2
GDP.d   0.03055313  -0.04681978  -0.8376985  -0.04220534  -1.271960e-17
HPA.d  -0.22649596  -0.24287691  -1.6358880   2.03813569  -8.002467e-17
FX.d    0.10327579   0.15150469  -0.1649066   0.37449910  -2.570250e-18
Y.d    -0.35200485   0.56808024  -5.7829738   0.01000965   1.730461e-16
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Granger-causality test using vars package

2017-01-23 Thread Pfaff, Bernhard Dr.
Dear T.Riedle,

you cannot assign *all* variables as a cause at once. Incidentally, in your 
example, you missed a 'data(Canada)'.
Having said this, you can loop over the variable names and extract the 
statistic/p-values. These are contained as named list elements 'statistic' and 
'p.value' in the returned list object 'Granger', which is of informal class 
'htest'.
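
A sketch of such a loop, using the Canada example from the vignette (lag order 
and deterministic terms as in your code):

library(vars)
data(Canada)
p1ct <- VAR(Canada[, c("prod", "e", "U", "rw")], p = 1, type = "both")

res <- sapply(colnames(p1ct$y), function(v) {
  g <- causality(p1ct, cause = v)$Granger
  c(statistic = unname(g$statistic), p.value = unname(g$p.value))
})
t(res)   # one row per causing variable: F statistic and p-value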

Best wishes,
Bernhard

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of T.Riedle
Sent: Sunday, 22 January 2017 14:11
To: R-help@r-project.org
Subject: [EXT] [R] Granger-causality test using vars package

Dear R-users,

I am trying to compute the test statistics for Granger-causality for a VAR(p) 
model using the "vars" package. I simply used the example proposed by the vars 
vignette and added the code for the Granger-causality. The code looks as follows



library(vars)

Canada <- Canada[, c("prod", "e", "U", "rw")]
p1ct <- VAR(Canada, p=1, type = "both")

causality(p1ct, cause = c("prod","e","U","rw"))



Unfortunately I get the error

Error in `[<-`(`*tmp*`, i, w[i], value = 1) : subscript out of bounds



Does anybody know what is wrong with the code? I would like to create a matrix 
containing the F-values and the corresponding significance levels. How can I do 
that?

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Granger-causality test using vars package

2017-01-23 Thread Pfaff, Bernhard Dr.
Dear T.Riedle,

it is a 'combined' test; see ?causality for a formal description of the test 
statistic. 
If you would like results on an 'equation by equation' approach, you could 
employ anova() on restricted and unrestricted lm-objects.
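
A sketch of that single-equation route (my illustration: does prod 
Granger-cause e in a VAR(1) with constant and trend, built by hand with lm()):

library(vars)
data(Canada)
y <- as.data.frame(Canada)
n <- nrow(y)
d <- data.frame(e = y$e[-1],
                prod.l1 = y$prod[-n], e.l1 = y$e[-n],
                U.l1 = y$U[-n], rw.l1 = y$rw[-n],
                trend = seq_len(n - 1))

unres <- lm(e ~ prod.l1 + e.l1 + U.l1 + rw.l1 + trend, data = d)
restr <- lm(e ~ e.l1 + U.l1 + rw.l1 + trend, data = d)  # lag of prod dropped
anova(restr, unres)   # F-test of "prod does not Granger-cause e"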

Best wishes,
Bernhard


-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of T.Riedle
Sent: Monday, 23 January 2017 13:26
To: R-help@r-project.org
Subject: [EXT] Re: [R] Granger-causality test using vars package

Thank you for your reply. The code follows the example in the vignette and I 
changed it only a little as shown below.

library(vars)
data(Canada)
summary(Canada)
stat.desc(Canada,basic=FALSE)
plot(Canada, nc=2, xlab="")

# Testing for unit roots using ADF
adf1<-adf.test(Canada[,"prod"])
adf1
adf2<-adf.test(Canada[,"e"])
adf2
adf3<-adf.test(Canada[,"U"])
adf3
adf4<-adf.test(Canada[,"rw"])
adf4

# Use VAR to create a list of class varest
Canada <- Canada[, c("prod", "e", "U", "rw")]
p1ct <- VAR(Canada, p=1, type = "both")
p1ct
summary(p1ct, equation="e")
plot(p1ct, names = "e")

#Run Granger-causality test
causality(p1ct)

The Granger-causality test returns the following output:

$Granger

Granger causality H0: prod do not Granger-cause e U rw

data:  VAR object p1ct
F-Test = 11.956, df1 = 3, df2 = 308, p-value = 1.998e-07


$Instant

H0: No instantaneous causality between: prod and e U rw

data:  VAR object p1ct
Chi-squared = 3.7351, df = 3, p-value = 0.2915


Warning message:
In causality(p1ct) : 
Argument 'cause' has not been specified; using first variable in 'x$y' (prod) 
as cause variable.

I am struggling with the result, as it is not clear to me whether the variable 
prod Granger-causes e or U or rw. H0 is that prod does not Granger-cause e, U, 
rw. What does that mean? How can I find out whether prod Granger-causes e, U 
and rw, respectively, i.e. how can I determine that prod Granger-causes each of 
e, U and rw?

Thanks for your support in advance.

From: Pfaff, Bernhard Dr. 
Sent: 23 January 2017 09:12
To: T.Riedle; R-help@r-project.org
Subject: AW:  [R] Granger-causality test using vars package

Dear T.Riedle,

you cannot assign *all* variables as a cause at once. Incidentally, in your 
example, you missed a 'data(Canada)'.
Having said this, you can loop over the variable names and extract the 
statistic/p-values. These are contained as named list elements 'statistic' and 
'p.value' in the returned list object 'Granger', which is of informal class 
'htest'.

Best wishes,
Bernhard

-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of T.Riedle
Sent: Sunday, 22 January 2017 14:11
To: R-help@r-project.org
Subject: [EXT] [R] Granger-causality test using vars package

Dear R-users,

I am trying to compute the test statistics for Granger-causality for a VAR(p) 
model using the "vars" package. I simply used the example proposed by the vars 
vignette and added the code for the Granger-causality. The code looks as follows



library(vars)

Canada <- Canada[, c("prod", "e", "U", "rw")]
p1ct <- VAR(Canada, p=1, type = "both")

causality(p1ct, cause = c("prod","e","U","rw"))



Unfortunately I get the error

Error in `[<-`(`*tmp*`, i, w[i], value = 1) : subscript out of bounds



Does anybody know what is wrong with the code? I would like to create a matrix 
containing the F-values and the corresponding significance levels. How can I do 
that?

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Impose Structure for Exogenous in vars Package

2017-02-27 Thread Pfaff, Bernhard Dr.
Hi Andrew,

if I understand your question correctly, you would like to place constraints on 
your exogenous variables in some VAR equations.
If so, please have a look at ?restrict.

As a toy example:

library(vars)
?restrict
data(Canada)
N <- nrow(Canada)
ExoVar <- matrix(runif(N))
colnames(ExoVar) <- "Exogenous"
mod <- VAR(Canada, exogen = ExoVar)
summary(mod)
summary(restrict(mod))

Here, ExoVar will be removed, given that the plain vanilla call to restrict() 
removes all variables with insignificant coefficients (|t-stat| < 2.0) in a VAR 
equation. 
You can also provide a 'constraint' matrix for entering zero restrictions, as 
sketched below.
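
A sketch of the manual variant (my illustration, continuing the toy example 
above; check colnames(mod$datamat) for the exact regressor order in your own 
model):

K   <- ncol(Canada)
rhs <- colnames(mod$datamat)[-(1:K)]         # regressors of each equation
resmat <- matrix(1, nrow = K, ncol = length(rhs),
                 dimnames = list(colnames(Canada), rhs))
resmat["e", "Exogenous"] <- 0                # zero restriction: drop ExoVar from the e-equation
summary(restrict(mod, method = "man", resmat = resmat))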

HTH,
Bernhard



-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Castro, Andrew 
William Keahi
Sent: Thursday, 23 February 2017 15:42
To: r-help@R-project.org
Subject: [EXT] [R] Impose Structure for Exogenous in vars Package
Betreff: [EXT] [R] Impose Structure for Exogenous in vars Package

Hello everyone,

I see there are structural VAR options in the vars package for the endogenous 
variables, but is there any easy way to impose structure on the exogenous 
variable matrix (notated as the matrix C on page 45 of 
https://cran.r-project.org/web/packages/vars/vars.pdf). Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Impulse response analysis within package vars

2008-08-07 Thread Pfaff, Bernhard Dr.
hello Sam,

just rescale the result. Please note that the *unit change* refers to the error 
term. By the same token, you can also rescale the impulse responses by making 
use of the standard deviation of the residuals.
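
A small sketch of the rescaling (illustrative; the responses are linear in the 
shock size, so a 0.25 change is just 0.25 times the unit-change response):

library(vars)
data(Canada)
var.2c   <- VAR(Canada, p = 2, type = "const")
irf.rw.e <- irf(var.2c, impulse = "rw", response = "e")

scaled <- irf.rw.e
scaled$irf   <- lapply(irf.rw.e$irf,   function(m) 0.25 * m)
scaled$Lower <- lapply(irf.rw.e$Lower, function(m) 0.25 * m)
scaled$Upper <- lapply(irf.rw.e$Upper, function(m) 0.25 * m)
plot(scaled)

Keep in mind the caveat above: the unit change refers to the error term of rw, 
not to rw itself.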

Best,
Bernhard

>
>Hi Everyone
>
> > var.2c <- VAR(Canada,p=2,type="const")
> > irf.rw.e <- irf(var.2c,impulse="rw",response=c("e"))
>
>...makes *vars* compute the orthogonalised impulse responses to a
>*unit* change in variable rw.
>Now, if I want to compute the the orthogonalised impulse responses to a
>*0.25* change in variable rw, how do I do that?
>
>Do I have to alter something in the vars-code?
>
>Regards,
>
>Sam
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing Rcmdr Plugins

2008-08-21 Thread Pfaff, Bernhard Dr.
Dear Irina,

though you asked explicitly about writing an RCommander plugin package, I
just wanted to add that the former approach of tailor-making menus in
the Commander still works. That is, just copy your R file with the
tcl/tk functions into the /etc directory of the RCommander and include
your menu structure in the file "Rcmdr-menus.txt".

Best,
Bernhard

>
>Dear Irina,
>
>
>On Wed, Aug 20, 2008 at 12:02 PM, Irina Ursachi
><[EMAIL PROTECTED]> wrote:
>> Dear all,
>>
>> I am trying to write a plugin for the RCommander and having troubles
>> understanding how to actually do that. I have read Mr. Fox's tutorial
>> about writing Rcmdr plugins and though it seems to me that some
>> steps are missing.  I would like to know, whether there are some
>> Commands which generate a plugin package out of a given library. Or do
>> we just have to attach the .First.lib function and the menus.txt data to
>> the library? Out of the RcmdrPlugin.TeachingDemos example I could see
>> that one of the differences between the plugin and the library is the
>> fact that the plugin package also contains the "etc" subdirectory... so
>> I suppose my plugin package should contain one too, right?
>> The way I was trying to build my plugin package (and it didn't work)
>> was: I installed Rcmdr, the latest version, locally in a folder named
>> "Rlib". That is where I also installed my library, trying to use the
>> "require" command to load it into R Commander. What I don't understand
>> is how to build the RcmdrPlugin.*?
>> I would really appreciate it if somebody could help me with that, or
>> maybe just recommend me a tutorial.
>>
>> Best regards,
>> Irina Ursachi.
>>
>
>In general, you can read the R-manuals (in particular, the "Writing R
>Extensions" one) to make sure that you have all of absolutely
>necessary components (a DESCRIPTION, an /R directory, all of the
>required documentation, etc.)
>
>That being said, the basic format of an RcmdrPlugin is:
>
>The /R directory:  this is where you put all of the Tcl/Tk functions
>for your menus that you want to add.  In addition, any extra functions
>that you wrote that are new should go in here.  You will need in here
>the .First.lib function that John mentioned in his article, in the
>format that he suggested.
>
>The /man directory:  this has all of the .Rd help files.   One of
>these files will likely be titled something like
>"RcmdrPlugin.foo-internal.Rd".  This is where you will put aliases to
>many of the functions that are 'internal' in the sense that they are
>not to be called by the user.
>
>The /inst directory:  in here you will need to put a file "menus.txt".
> John Fox's article gives you lots of details about how to set it up
>correctly.  You only need to include lines for the menus that you are
>specifically adding.  And now with Rcmdr_1.4-0, you can include lines
>for menus that you prefer _not_ to appear (this really helps with
>making the menus less cluttered when multiple plugins are loaded).
>
>The /data directory: optional.  Do you want to include data 
>with your plugin?
>
>Your DESCRIPTION file should be set up as the article recommends, with
>the DEPENDS argument showing at least Rcmdr 1.3.0 (and you would be
>advised to go with Rcmdr 1.4.0).  See the article about the details of
>DESCRIPTION.
>
>There are a lot more things that you can do;  please see the manuals
>for details.  But the above has the bare essentials for a RcmdrPlugin
>(I don't believe that I am forgetting anything.)
>
>Once you have those essential parts, then the RcmdrPlugin is built and
>installed in exactly the same way as any other R package (again, see
>the manuals  I see an "R CMD build" in your future...  :-).  There
>are many, many messages on the R-help archive to help you for your
>operating system, and many people have written online tutorials.
>
>Perhaps the best advice is to download the source code
>RcmdrPlugin.x.tar.gz from CRAN (e.g. RcmdrPlugin.IPSUR), extract
>the archive, and take a look at how other people have done it.
>
>I hope that this helps, and good luck!  :-)
>
>Jay
>
>
>
>
>***
>G. Jay Kerns, Ph.D.
>Associate Professor
>Department of Mathematics & Statistics
>Youngstown State University
>Youngstown, OH 44555-0002 USA
>Office: 1035 Cushwa Hall
>Phone: (330) 941-3310 Office (voice mail)
>-3302 Department
>-3170 FAX
>E-mail: [EMAIL PROTECTED]
>http://www.cc.ysu.edu/~gjkerns/
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

Re: [R] VAR (pckg: vars) and memory problem

2009-08-17 Thread Pfaff, Bernhard Dr.
Dear Bernd,

which version of the package vars are you using? Have you tried estimating the 
VAR first, and only that? Within the function VAR() the equations are estimated 
by lm(). Would you be so kind as to send the result of traceback()?

Best,
Bernhard
 

>-----Original Message-----
>From: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] On Behalf Of 
>herrdittm...@yahoo.co.uk
>Sent: Saturday, 15 August 2009 15:47
>To: r-help@r-project.org
>Subject: [R] VAR (pckg: vars) and memory problem
>
>Hi all,
>
>When I tried to estimate a VAR (package vars) of a rather 
>large dataset with 5 lags:
>
>
>> dim(trial.var) 
>[1] 20388 2 
>
>
>I ran into memory troubles:
>
>
>> summary(VAR(trial.var, type="none", p=5)) 
>Error: cannot allocate vector of size 3.1 Gb 
>In addition: Warning messages: 
>1: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>  Reached total allocation of 1535Mb: see help(memory.size) 
>2: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>  Reached total allocation of 1535Mb: see help(memory.size) 
>3: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>  Reached total allocation of 1535Mb: see help(memory.size) 
>4: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>  Reached total allocation of 1535Mb: see help(memory.size) 
>
>
>Luckily, I was able to slice and dice my dataset into 
>individual days with ca. 3000 lines each and estimated each subset.
>
>Now, I nonetheless would like to run the VAR over the whole set.
>
>Is there any way I can extend the memory used by R? Perhaps 
>forcing it? I am running R on a XP box with 1GB RAM. 
>
>
>Many thanks for any pointers.
>
>Bernd
>--
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VAR (pckg: vars) and memory problem

2009-08-18 Thread Pfaff, Bernhard Dr.
Hello Bernd,

many thanks for providing the details. As you can see from traceback, the 
warning refers to the calculation of the value of the log-likelihood, which is 
used in vars:::logLik.varest. I will slice the calculation and hopefully this 
will resolve the memory problems (update on R-Forge first and then on CRAN). In 
the meantime, you can use the summary method for lm objects and lapply these:
lapply(foo$varresult, summary)

in order to obtain summary results for the individual equations.


Best,
Bernhard 
 

>-----Original Message-----
>From: herrdittm...@yahoo.co.uk [mailto:herrdittm...@yahoo.co.uk] 
>Sent: Monday, 17 August 2009 18:27
>To: Pfaff, Bernhard Dr.; r-help@r-project.org
>Subject: Re: AW: [R] VAR (pckg: vars) and memory problem
>
>Dear Bernard, 
>
>
>Please find attached the output of traceback() below for this 
>rather large VAR. I am using vars 1.4-6 at the moment:
>
>
>> dim(var.trial)
>[1] 22367 3  
>
>
>> summary(VAR(var.trial, type="none", p=3))
>Error: cannot allocate vector of size 3.7 Gb
>In addition: Warning messages:
>1: In diag(resids %*% solve(Sigma) %*% t(resids)) :
>  Reached total allocation of 1535Mb: see help(memory.size)
>2: In diag(resids %*% solve(Sigma) %*% t(resids)) :
>  Reached total allocation of 1535Mb: see help(memory.size)
>3: In diag(resids %*% solve(Sigma) %*% t(resids)) :
>  Reached total allocation of 1535Mb: see help(memory.size)
>4: In diag(resids %*% solve(Sigma) %*% t(resids)) :
>  Reached total allocation of 1535Mb: see help(memory.size)
>
>
>> traceback()
>6: diag(resids %*% solve(Sigma) %*% t(resids))
>5: logLik.varest(object)
>4: logLik(object)
>3: summary.varest(VAR(var.trial, type = "none", p = 3))
>2: summary(VAR(var.trial, type = "none", p = 3))
>1: summary(VAR(var.trial, type = "none", p = 3))
>
>
>Oddly enough, VAR(...) itself returned the regressors. Am I 
>missing something rather trivial? 
>
>Many thanks in advance and best regards,
>
>Bernd 
>
>--
>
>-Original Message-
>From: "Pfaff, Bernhard Dr." 
>
>Date: Mon, 17 Aug 2009 09:03:03 
>To: ; 
>Subject: AW: [R] VAR (pckg: vars)  and memory problem
>
>
>Dear Bernd,
>
>which version of the package vars are you using? Have you tried estimating 
>the VAR first, and only that? Within the function VAR() the equations are 
>estimated by lm(). Would you be so kind as to send the result of traceback()?
>
>Best,
>Bernhard
> 
>
>>-----Original Message-----
>>From: r-help-boun...@r-project.org 
>>[mailto:r-help-boun...@r-project.org] On Behalf Of 
>>herrdittm...@yahoo.co.uk
>>Sent: Saturday, 15 August 2009 15:47
>>To: r-help@r-project.org
>>Subject: [R] VAR (pckg: vars) and memory problem
>>
>>Hi all,
>>
>>When I tried to estimate a VAR (package vars) of a rather 
>>large dataset with 5 lags:
>>
>>
>>> dim(trial.var) 
>>[1] 20388 2 
>>
>>
>>I ran into memory troubles:
>>
>>
>>> summary(VAR(trial.var, type="none", p=5)) 
>>Error: cannot allocate vector of size 3.1 Gb 
>>In addition: Warning messages: 
>>1: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>>  Reached total allocation of 1535Mb: see help(memory.size) 
>>2: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>>  Reached total allocation of 1535Mb: see help(memory.size) 
>>3: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>>  Reached total allocation of 1535Mb: see help(memory.size) 
>>4: In diag(resids %*% solve(Sigma) %*% t(resids)) : 
>>  Reached total allocation of 1535Mb: see help(memory.size) 
>>
>>
>>Luckily, I was able to slice and dice my dataset into 
>>individual days with ca. 3000 lines each and estimated each subset.
>>
>>Now, I nonetheless would like to run the VAR over the whole set.
>>
>>Is there any way I can extend the memory used by R? Perhaps 
>>forcing it? I am running R on a XP box with 1GB RAM. 
>>
>>
>>Many thanks for any pointers.
>>
>>Bernd
>>--
>>__
>>R-help@r-project.org mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-help
>>PLEASE do read the posting guide 
>>http://www.R-project.org/posting-guide.html
>>and provide commented, minimal, self-contained, reproducible code.
>>

Re: [R] : How to read stability VAR plot?

2009-09-11 Thread Pfaff, Bernhard Dr.
>
>
>I have made program code for Vector Auto Regressive in terms
>of completing my undergraduate program using R. I have an important
>question related to my project.
>If I have:
>data(Canada)
>var.2c <- VAR(Canada, p = 2, type = "const")
>var.2c.stabil <- stability(var.2c, type = "OLS-CUSUM")
>I want to get the value of plot(var.2c.stabil). Can you 
>help me with what I should do or write so that I can obtain those values?

Dear Arif,

the stability function employs the package strucchange. Have a look at

str(var.2c.stabil$stability)

and then, for a particular equation (e.g. unemployment)

str(var.2c.stabil$stability$U)


the data for the process is contained in the list element "process".
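
For instance (a short sketch along the lines of your code):

library(vars)
data(Canada)
var.2c        <- VAR(Canada, p = 2, type = "const")
var.2c.stabil <- stability(var.2c, type = "OLS-CUSUM")

efp.U <- var.2c.stabil$stability$U   # an 'efp' object from strucchange
efp.U$process                        # the fluctuation process that plot() draws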

HTH,
Bernhard

> 
>It means if I have source code:
>data(Canada)
>x=acf(Canada)
> 
>I will get the value of acf if I write x in R. Thanks in advance
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VAR with contemporaneous effects

2010-03-12 Thread Pfaff, Bernhard Dr.
Dear Mitch,

have you taken a look at ?SVAR in the package vars? Note, though, that the 
inclusion of exogenous variables is currently not supported there.
In principle, your model is a simultaneous interdependent multiple equation 
model. For estimating these kinds of models, have a look at the package 
systemfit; and/or you can estimate your reaction functions by OLS, collect 
these with -- if applicable -- identities, and solve the model on a per-period 
basis by applying, for instance, the Gauß-Seidel algorithm.
As an aside, one can cast these types of models with the Fair-Parke program and 
call FP from R with system(). This last option is probably more appropriate for 
larger macroeconomic models. For more information see:

http://fairmodel.econ.yale.edu/fp/fp.htm   

Best,
Bernhard

 |>  -Original Message-
 |>  From: r-help-boun...@r-project.org 
 |>  [mailto:r-help-boun...@r-project.org] On Behalf Of Downey, Patrick
 |>  Sent: Friday, March 12, 2010 12:53 AM
 |>  To: r-help@r-project.org
 |>  Subject: [R] VAR with contemporaneous effects
 |>  
 |>  Hi,
 |>  
 |>  I would like to estimate a VAR of the form:
 |>  
 |>  Ay_t = By_t-1 + Cy_t-2 + ... + Dx_t + e_t
 |>  
 |>  Where A is a non-diagonal matrix of coefficients, B and C 
 |>  are matricies of
 |>  coefficients and D is a matrix of coefficients for the 
 |>  exogenous variables.
 |>  
 |>  I don't think the package {vars} can do this because I 
 |>  want to include
 |>  contemporaneous cross-variable impacts. 
 |>  
 |>  So I want y1_t to affect y2_t and I think in {vars} I can 
 |>  only have y1_t-1
 |>  affect y2_t. 
 |>  
 |>  {vars} will only allow VARs of the form:
 |>  
 |>  y_t = By_t-1 + Cy_t-2 + ... + Dx_t + e_t
 |>  
 |>  Solutions? Maybe another package? Or maybe I'm thinking 
 |>  about this wrong?
 |>  
 |>  (And I know that I have to put constraints on A to get 
 |>  identification - I'm
 |>  willing to do that).
 |>  
 |>  Thanks,
 |>  Mitch Downey
 |>  
 |>  __
 |>  R-help@r-project.org mailing list
 |>  https://stat.ethz.ch/mailman/listinfo/r-help
 |>  PLEASE do read the posting guide 
 |>  http://www.R-project.org/posting-guide.html
 |>  and provide commented, minimal, self-contained, reproducible code.
 |>  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adf.test Vs ADF.test...

2009-06-25 Thread Pfaff, Bernhard Dr.
Dear Harry,

to complete the picture, for the packages installed on my machine help.search() 
yielded:

> help.search("Dickey")
Help files with alias or concept or title matching 'Dickey' using fuzzy
matching:


CADFtest::CADFtest  Hansen's Covariate-Augmented Dickey Fuller
(CADF) test
fUnitRoots::DickeyFullerPValues
Dickey-Fuller p Values
tseries::adf.test   Augmented Dickey-Fuller Test
urca::Raotbl1   Data set used by Dickey, Jansen & Thornton
(1994)
urca::Raotbl2   Data set used by Dickey, Jansen & Thornton
(1994)
urca::ur.df Augmented-Dickey-Fuller Unit Root Test
uroot::ADF.rectest  Augmented Dickey-Fuller Recursive Test
uroot::ADF.test Augmented Dickey-Fuller Test


Type '?PKG::FOO' to inspect entry 'PKG::FOO TITLE'.
> 


At least ur.df() in urca and IIRC CADFtest() in CADFtest provide arguments for 
lag selection.

HTH,
Bernhard
 

>-----Original Message-----
>From: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] On Behalf Of DongHongwei
>Sent: Thursday, 25 June 2009 09:27
>To: r-help@r-project.org
>Subject: Re: [R] adf.test Vs ADF.test...
>
>
>
>Hi, R users,
>I'm using R to test the unit root for my time series data. I 
>just compared the "ADF.test" in "uroot" package and the 
>"adf.test" in "tseries" package. It seems it is difficult to 
>define the time trend and intercept in "adf.test". But it is 
>easy to do these in "ADF.test". "ADF.test" also helps you find 
>the number of lags that need to be included in the model to 
>remove the serial correlation. I do not see that "adf.test" is 
>able to do this too. You need to define the number of lags for 
>adf.test. 
>  I'm a new R user and I could be wrong. I'll appreciate it 
>very much if someone familiar with time series in R can give 
>me some comments and suggestions. Thanks in advance.
>
>  Harry
>
>
>> From: dongh...@hotmail.com
>> To: patrick.richard...@vai.org; r-help@r-project.org
>> Date: Wed, 24 Jun 2009 19:22:46 +
>> Subject: Re: [R] Why can't I use ADF.test?
>> 
>> 
>> Thanks! I tried these, but I got the following messages:
>> Warning message:
>> In getDependencies(pkgs, dependencies, available, lib) :
>>   package 'uroot' is not available
>> Error in library(uroot) : there is no package called 'uroot'
>> 
>> 
>> I downloaded a package called "uroot" and put it into:
>> C:\Program Files\R\R-2.9.0\library. But I still got the same
>> error. Any further suggestions?
>>   Hongwei 
>> 
>> 
>> > From: patrick.richard...@vai.org
>> > To: dongh...@hotmail.com; r-help@r-project.org
>> > Date: Wed, 24 Jun 2009 15:12:22 -0400
>> > Subject: RE: [R] Why can't I use ADF.test?
>> > 
>> > Since you provided no code, the following is just a guess,  Try:
>> > 
>> > install.packages("uroot")
>> > library(uroot)
>> > 
>> > Then try your analysis again.
>> > 
>> > 
>> > -Original Message-
>> > From: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] On Behalf Of DongHongwei
>> > Sent: Wednesday, June 24, 2009 3:06 PM
>> > To: r-help@r-project.org
>> > Subject: [R] Why can't I use ADF.test?
>> > 
>> > 
>> > Greetings!
>> > I'm trying to use R to test the unit root for a univariate 
>data. By this link: 
>> > http://rss.acs.unt.edu/Rdoc/library/uroot/html/ADF.test.html
>> >  it tells me that I can use the function ADF.test(). 
>However, when I tried this in R, I got this message:
>> > "Error: could not find function "ADF.test"".
>> > I'm confused by this. Anyone could give me some hints? Thanks.
>> > Hongwei
>> > 
>> > 
>> > 
>> > 
>> >[[alternative HTML version deleted]]
>> > 
>> > This email message, including any attachments, is for the 
>sole use of the intended recipient(s) and may contain 
>confidential information.  Any unauthorized review, use, 
>disclosure or distribution is prohibited.  If you are not the 
>intended recipient(s) please contact the sender by reply email 
>and destroy all copies of the original message.  Thank you.
>> 
>>  [[alternative HTML version deleted]]
>> 
>
>   [[alternative HTML version deleted]]
>
>

Re: [R] Simulation of VAR

2010-03-29 Thread Pfaff, Bernhard Dr.
Dear Ron,

have you had a look at the package dse? There, ARMA models can be
specified and simulated. The only exercise left for you is to transform
the VECM coefficients into their level-VAR values. 
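
For completeness, the transformation itself is simple; a sketch in my own 
notation (Pi and Gamma1 stand for your estimated VECM coefficient matrices in 
the transitory specification with one lagged difference):

## dy_t = Pi %*% y_{t-1} + Gamma1 %*% dy_{t-1} + e_t   (transitory VECM)
## corresponds to the level VAR(2)
## y_t  = A1 %*% y_{t-1} + A2 %*% y_{t-2} + e_t   with
K  <- nrow(Pi)
A1 <- Pi + diag(K) + Gamma1
A2 <- -Gamma1

If I remember correctly, the function vec2var() in the vars package performs 
this kind of conversion for ca.jo objects restricted to a chosen cointegration 
rank.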

Best,
Bernhard 

 |>  -Original Message-
 |>  From: r-help-boun...@r-project.org 
 |>  [mailto:r-help-boun...@r-project.org] On Behalf Of Ron_M
 |>  Sent: Saturday, March 27, 2010 12:14 PM
 |>  To: r-help@r-project.org
 |>  Subject: [R] Simulation of VAR
 |>  
 |>  
 |>  Dear all, is there any package/function available which simulates a
 |>  co-integrating VAR model once the model parameters are 
 |>  input over some
 |>  arbitrary horizon? Please let me know anyone aware of that.
 |>  
 |>  Thanks
 |>  -- 
 |>  View this message in context: 
 |>  http://n4.nabble.com/Simulation-of-VAR-tp1693295p1693295.html
 |>  Sent from the R help mailing list archive at Nabble.com.
 |>  
 |>  __
 |>  R-help@r-project.org mailing list
 |>  https://stat.ethz.ch/mailman/listinfo/r-help
 |>  PLEASE do read the posting guide 
 |>  http://www.R-project.org/posting-guide.html
 |>  and provide commented, minimal, self-contained, reproducible code.
 |>  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sweave() within a function: objects not found

2009-11-11 Thread Pfaff, Bernhard Dr.
Dear list subscriber,

suppose, I do have a minimal Sweave file 'test.Rnw':
\documentclass{article}
\begin{document}
<<printx>>=
x
@ 
\end{document}


Within R, I define the following function:

f <- function(x){
  Sweave("test.Rnw")
}

The call:

f(x = 1:10)

results in the following error message:

> f(x = 1:10)
Writing to file test.tex
Processing code chunks ...
 1 : echo term verbatim (label=printx)

Error:  chunk 1 (label=printx) 
Error in eval(expr, envir, enclos) : object 'x' not found

In principle, I could assign x to the global environment and then the Sweave 
file will be processed correctly:

> f2 <- function(x){
+   attach(list(x = x))
+   Sweave("test.Rnw")
+ }
> f2(x = 1:10)
Writing to file test.tex
Processing code chunks ...
 1 : echo term verbatim (label=printx)

You can now run LaTeX on 'test.tex'
> 

Kind of a dumb question, but how can it be achieved that Sweave recognizes the 
objects within this function call?

Any pointers are most welcome,
Bernhard

> sessionInfo()
R version 2.10.0 (2009-10-26) 
i386-pc-mingw32 

locale:
[1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252   
[3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C   
[5] LC_TIME=German_Germany.1252

attached base packages:
[1] stats graphics  datasets  utils grDevices methods   base 

other attached packages:
[1] fortunes_1.3-6
> 

Dr. Bernhard Pfaff
Director
Global Quantitative Equity

Invesco Asset Management Deutschland GmbH
An der Welle 5
D-60322 Frankfurt am Main

Tel: +49 (0)69 29807 230
Fax: +49 (0)69 29807 178
www.institutional.invesco.com
Email: bernhard_pf...@fra.invesco.com

Managing Directors: Karl Georg Bayer, Bernhard Langer, Dr. Jens Langewand, 
Alexander Lehmann, Christian Puschmann
Commercial Register: Frankfurt am Main, HRB 28469
Registered Office: Frankfurt am Main


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sweave() within a function: objects not found

2009-11-12 Thread Pfaff, Bernhard Dr.
>
>On 11/11/2009 12:09 PM, Pfaff, Bernhard Dr. wrote:
>> Dear list subscriber,
>> 
>> suppose, I do have a minimal Sweave file 'test.Rnw':
>> \documentclass{article}
>> \begin{document}
>> <<printx>>=
>> x
>> @ 
>> \end{document}
>> 
>> 
>> Within R, I define the following function:
>> 
>> f <- function(x){
>>   Sweave("test.Rnw")
>> }
>> 
>> The call:
>> 
>> f(x = 1:10)
>> 
>> results in the following error message:
>> 
>>> f(x = 1:10)
>> Writing to file test.tex
>> Processing code chunks ...
>>  1 : echo term verbatim (label=printx)
>> 
>> Error:  chunk 1 (label=printx) 
>> Error in eval(expr, envir, enclos) : object 'x' not found
>> 
>> In principle, I could assign x to the global environment and 
>then the Sweave file will be processed correctly:
>> 
>>> f2 <- function(x){
>> +   attach(list(x = x))
>> +   Sweave("test.Rnw")
>> + }
>>> f2(x = 1:10)
>> Writing to file test.tex
>> Processing code chunks ...
>>  1 : echo term verbatim (label=printx)
>> 
>> You can now run LaTeX on 'test.tex'
>>> 
>> 
>> Kind of a dum question, but how could it be achieved that 
>Sweave recognizes the objects within this function call?
>
>The way you did it is close.  You can attach all the local 
>variables by 
>using
>
>attach(environment())
>
Dear Duncan,

many thanks for your swift reply and pointers.

>though global variables will take precedence, because attach puts the 
>environment 2nd in the search list.  And you'd better remember 
>to detach 
>them.
>

sure.
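
Something along these lines, I suppose (a minimal sketch using on.exit() so 
the detach is not forgotten):

f <- function(x){
  attach(environment())
  on.exit(detach())      # detach the attached environment again on exit
  Sweave("test.Rnw")
}
f(x = 1:10)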

>I'd say it's better to make Sweave files self-contained, so 
>that you can 
>run R CMD Sweave outside of R, and get the right results.  

Right, this is ordinarily the way I proceed, too. But now I would like to 
include "ready-made" Sweave reports in a package, e.g. 
/inst/reports/template1.Rnw, and provide a function, say reportgen(), with 
which a user can provide his objects that are then used within the Sweave 
file(s).

>But if you 
>really want to do this, then you can write your own Sweave driver and 
>replace the default  RweaveEvalWithOpt  with a function that looks 
>elsewhere for variables.
>

Having said the above, I think that this will be the route to take and indeed 
in RweaveEvalWithOpt is the line:

res <- try(withVisible(eval(expr, .GlobalEnv)), silent = TRUE)

which needs to be adjusted.

Best,
Bernhard

>Duncan Murdoch
>
>> 
>> Any pointers are most welcome,
>> Bernhard
>> 
>>> sessionInfo()
>> R version 2.10.0 (2009-10-26) 
>> i386-pc-mingw32 
>> 
>> locale:
>> [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252   
>> [3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C   
>> [5] LC_TIME=German_Germany.1252
>> 
>> attached base packages:
>> [1] stats graphics  datasets  utils grDevices 
>methods   base 
>> 
>> other attached packages:
>> [1] fortunes_1.3-6
>>> 
>> 
>> Dr. Bernhard Pfaff
>> Director
>> Global Quantitative Equity
>> 
>> Invesco Asset Management Deutschland GmbH
>> An der Welle 5
>> D-60322 Frankfurt am Main
>> 
>> Tel: +49 (0)69 29807 230
>> Fax: +49 (0)69 29807 178
>> www.institutional.invesco.com
>> Email: bernhard_pf...@fra.invesco.com
>> 
>> Geschäftsführer: Karl Georg Bayer, Bernhard Langer, Dr. Jens 
>Langewand, Alexander Lehmann, Christian Puschmann
>> Handelsregister: Frankfurt am Main, HRB 28469
>> Sitz der Gesellschaft: Frankfurt am Main
>> 
>> 
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] test for causality

2009-11-17 Thread Pfaff, Bernhard Dr.
>
>
>Hi useRs..
>
>I can't figure out how to test for causality using causality() in the vars
>package.
>
>I have two datasets (A, B) and I want to test if A (Granger-)causes B.
>How do I write the script? I don't understand ?causality. How 

Dear Tobias,

have a look at example(causality). A Granger-causality test is an F-test.
You need to estimate a VAR first, i.e., provide the complete data set
(cbind(A, B)) in VAR(), and then provide the causing variable(s) in
causality().
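
A short sketch (A and B stand for your two series; the lag order below is 
arbitrary, use VARselect() to pick one):

library(vars)
x    <- cbind(A, B)                       # both series in one data set
var1 <- VAR(x, p = 2, type = "const")
causality(var1, cause = "A")$Granger      # F-test of "A does not Granger-cause B"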

Best,
Bernhard


>do I get x to
>"contain" A and B? Further, I don't understand either how to use the command
>VAR() to specify x.
>
>Kind regards Tobias
>
>-- 
>View this message in context: 
>http://old.nabble.com/test-for-causality-tp26373931p26373931.html
>Sent from the R help mailing list archive at Nabble.com.
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Dickey-Fuller Tests with no constant and no trend

2009-05-18 Thread Pfaff, Bernhard Dr.
Dear Jake,

have you had a look at the function 'ur.df()' contained in the package urca? 
You will find:

> library(urca)
> args(ur.df)
function (y, type = c("none", "drift", "trend"), lags = 1,
    selectlags = c("Fixed", "AIC", "BIC"))

HTH,
Bernhard  

>-Ursprüngliche Nachricht-
>Von: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] Im Auftrag von jbrukh
>Gesendet: Freitag, 15. Mai 2009 20:37
>An: r-help@r-project.org
>Betreff: [R] Dickey-Fuller Tests with no constant and no trend
>
>
>R has a Dickey-Fuller Test implementation (adf.test) that 
>tests for unit
>roots in an autoregressive process with a constant and linear 
>trend.  Is
>there a DF implementation that doesn't use the constant or trend?
>
>Thanks,
>Jake. 
>
>-- 
>View this message in context: 
>http://www.nabble.com/Dickey-Fuller-Tests-with-no-constant-and-
>no-trend-tp23565210p23565210.html
>Sent from the R help mailing list archive at Nabble.com.
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Finding cointegration relations in a VAR(1)

2009-06-04 Thread Pfaff, Bernhard Dr.

>Von: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] Im Auftrag von 
>severine.gai...@unil.ch
>Gesendet: Donnerstag, 4. Juni 2009 01:43
>An: r-h...@stat.math.ethz.ch
>Betreff: [R] Finding cointegration relations in a VAR(1)
>
>Dear R people,
>
>I am trying to find the cointegration relations in a VAR(1).
>The ca.jo function conducts the Johansen procedure, but we
>have to specify at least 2 lags. How should I do if I want
>to include only one lag?
>

hello Severine,

the lag argument refers to the level-VAR; with the minimum K = 2 you therefore
end up with one lagged difference of the endogenous variables in your VECM. In
case you do want to specify a VECM without any lagged endogenous variables, you
have to calculate the reduced-rank regressions by hand and derive the relevant
tests from there. You can partly employ the code in ca.jo() in order to do so.
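
For what it is worth, a rough sketch of the reduced-rank step without any
lagged differences (constant only, no seasonal dummies; 'Y' denotes your T x k
data matrix) could look like the following; this is an illustration, not the
ca.jo() code itself:

Z0 <- diff(Y)                          # Delta y_t
Z1 <- Y[-nrow(Y), , drop = FALSE]      # y_{t-1}
R0 <- scale(Z0, center = TRUE, scale = FALSE)   # concentrate out the constant
R1 <- scale(Z1, center = TRUE, scale = FALSE)
N  <- nrow(R0)
S00 <- crossprod(R0) / N
S01 <- crossprod(R0, R1) / N
S11 <- crossprod(R1) / N
## eigenvalues of S11^{-1} S10 S00^{-1} S01
lambda <- Re(eigen(solve(S11) %*% t(S01) %*% solve(S00) %*% S01)$values)
## trace statistic for rank r: -N * sum(log(1 - lambda[(r + 1):ncol(Y)]))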

Best,
Bernhard

>Thank you very much for your help!
>
>Have a nice day,
>
>Severine Gaille
>
>_
>
>Ecole des HEC
>Universite de Lausanne
>CH-1015 Lausanne
>Tel (+41 21) 692 33 76
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Copula package

2009-04-22 Thread Pfaff, Bernhard Dr.
Dear Roslina,

question: have you used 'library(copula)' somewhere before the call to 
'normalCopula'?
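
I.e., something along the lines of (a sketch of your session):

install.packages(repos = NULL, pkgs = "c:\\Tinn-R\\copula_0.8-3.zip")
library(copula)   # this step makes normalCopula(), tCopula() etc. visible
norm.cop <- normalCopula(c(0.5, 0.6, 0.7), dim = 3, dispstr = "un")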

Bernhard


>-Ursprüngliche Nachricht-
>Von: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] Im Auftrag von Roslina Zakaria
>Gesendet: Mittwoch, 22. April 2009 09:45
>An: r-help@r-project.org
>Betreff: [R] Copula package
>
>
>Hi R-users,
>
>I would like to use the copula package.  I  the package plus 
>the mvtnorm and try to run the example given, but I got the 
>following message:
>
>install.packages(repos=NULL,pkgs="c:\\Tinn-R\\copula_0.8-3.zip")
>norm.cop <- normalCopula(c(0.5, 0.6, 0.7), dim = 3, dispstr = "un")
>t.cop <- tCopula(c(0.5, 0.3), dim = 3, dispstr = "toep",
>df = 2, df.fixed = TRUE)
>## from the wrapper
>norm.cop <- ellipCopula("normal", param = c(0.5, 0.6, 0.7),
>dim = 3, dispstr = "un")
>
>> install.packages(repos=NULL,pkgs="c:\\Tinn-R\\copula_0.8-3.zip")
>package 'copula' successfully unpacked and MD5 sums checked
>updating HTML package descriptions
>> norm.cop <- normalCopula(c(0.5, 0.6, 0.7), dim = 3, dispstr = "un")
>Error: could not find function "normalCopula"
>> t.cop <- tCopula(c(0.5, 0.3), dim = 3, dispstr = "toep",
>+ df = 2, df.fixed = TRUE)
>Error: could not find function "tCopula"
>> norm.cop <- ellipCopula("normal", param = c(0.5, 0.6, 0.7),
>+ dim = 3, dispstr = "un")
>Error: could not find function "ellipCopula"
>
>
>I'm not sure what is wrong.  Thank you so much for any help given.
>
>Regards,
>
>Roslina
>
>
>
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Out-of-sample prediction with VAR

2010-02-08 Thread Pfaff, Bernhard Dr.
Hello Peter,

by judging from your code snippet:

 |> ts_Y <- ts(log_residuals[1:104]); # detrended sales data
 |> ts_XGG <- ts(salesmodeldata$gtrends_global[1:104]);
 |> ts_XGL <- ts(salesmodeldata$gtrends_local[1:104]);
 |> training_matrix <- data.frame(ts_Y, ts_XGG, ts_XGL);
 |>  
 |> ### Try VAR(3)
 |> var_model <- VAR (y=training_matrix, p=3, 
 |>  type="both", season=NULL,
 |>  exogen=NULL,  lag.max=NULL);


you have one endogenous variable, namely ts_Y, and two exogenous
variables, namely ts_XGG and ts_XGL. But the way you have set up
'training_matrix', all three variables are treated as endogenous (see
?VAR for more information).
What you really want to estimate and predict is a **univariate** AR(3)
model with two exogenous variables. For this type of model VAR() is
not the right function; rather, you could use lm() and/or dynlm().
The forecasts should then be computed recursively.
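
A sketch of what that could look like (using your object names; all series as
ts objects):

library(dynlm)
## univariate AR(3) for ts_Y with the two exogenous regressors
fit <- dynlm(ts_Y ~ L(ts_Y, 1:3) + ts_XGG + ts_XGL)
## for the hold-out period, forecast step by step: plug in the observed
## exogenous values and feed each one-step forecast of ts_Y back in as the
## new lagged value for the next step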

Best,
Bernhard

 |>  -Original Message-
 |>  From: r-help-boun...@r-project.org 
 |>  [mailto:r-help-boun...@r-project.org] On Behalf Of 
 |>  pe...@linelink.nl
 |>  Sent: Sunday, February 07, 2010 11:37 PM
 |>  To: r-help@r-project.org
 |>  Subject: [R] Out-of-sample prediction with VAR
 |>  
 |>  Good day,
 |>  
 |>  I'm using a VAR model to forecast sales with some extra 
 |>  variables (google
 |>  trends data). I have divided my dataset into a trainingset 
 |>  (weekly sales +
 |>  vars in 2006 and 2007) and a holdout set (2008).
 |>  It is unclear to me how I should predict the out-of-sample 
 |>  data, because
 |>  using the predict() function in the vars package seems to 
 |>  estimate my
 |>  google trends vars as well. However, I want to forecast 
 |>  the sales figures,
 |>  with knowledge of the actual google trends data.
 |>  
 |>  My questions:
 |>  1. How should I do this? I currently extract the linear 
 |>  model generated by
 |>  the VAR(3) function to predict the holdout set, but that seems
 |>  inappropriate?
 |>  2. In case that I am doing it right, how is it possible that a
 |>  automatically fitted model with more variables actually 
 |>  performs less good
 |>  (in terms of MAPE)? Shouldn't it at least predict just as 
 |>  well as the
 |>  simple AR(3) by finding that the extra variables have no 
 |>  added value?
 |>  
 |>  My code:
 |>  
 |> ts_Y <- ts(log_residuals[1:104]); # detrended sales data
 |> ts_XGG <- ts(salesmodeldata$gtrends_global[1:104]);
 |> ts_XGL <- ts(salesmodeldata$gtrends_local[1:104]);
 |> training_matrix <- data.frame(ts_Y, ts_XGG, ts_XGL);
 |>  
 |> ### Try VAR(3)
 |> var_model <- VAR (y=training_matrix, p=3, 
 |>  type="both", season=NULL,
 |>  exogen=NULL,  lag.max=NULL);
 |>  
 |> ## Out of sample forecasting
 |> var.lm = lm(var_model$varresult$ts_Y); # the 
 |>  generated LM
 |>  
 |> ts_Y <- ts(log_residuals[105:155]);
 |> ts_XGG <- ts(salesmodeldata$gtrends_global[105:155]);
 |> ts_XGL <- ts(salesmodeldata$gtrends_local[105:155]);
 |>  
 |> # Notice how I manually create the lagged 
 |>  values to be used in the
 |>  Linear Model
 |> holdout_matrix <- 
 |>  na.omit(data.frame(ts.union(ts_Y, ts_XGG, ts_XGL,
 |>  ts_Y.l1 = lag(ts_Y,-1), ts_Y.l2 = lag(ts_Y,-2), ts_Y.l3 = 
 |>  lag(ts_Y,-3),
 |>  ts_XGG.l1 = lag(ts_XGG,-1), ts_XGG.l2 = lag(ts_XGG,-2), ts_XGG.l3 =
 |>  lag(ts_XGG,-3), ts_XGL.l1 = lag(ts_XGL,-1), ts_XGL.l2 = 
 |>  lag(ts_XGL,-2),
 |>  ts_XGL.l3 = lag(ts_XGL,-3), const=1, trend=0.0001514194  )));
 |>  
 |> var.predict = predict(object=var_model, 
 |>  n.ahead=52, dumvar=holdout_matrix);
 |>  
 |> ## Assess accuracy
 |> calc_mape (holdout_matrix$ts_Y, var.predict, 
 |>  islog=T, print=T)
 |>  
 |>  Some context:
 |>  For my Master's thesis I'm using R to test the predictive 
 |>  power of web
 |>  metrics (such as google trends data & pageviews) in sales 
 |>  forecasting. To
 |>  properly assess this, I employ a simple AR model (for time 
 |>  series without
 |>  the extra variables) and a VAR model for the predictions 
 |>  with the extra
 |>  variables. I also develop a random forest with, and 
 |>  without the buzz
 |>  variables and see if MAPE improves.
 |>  
 |>  Many thanks in advance!
 |>  
 |>  __
 |>  R-help@r-project.org mailing list
 |>  https://stat.ethz.ch/mailman/listinfo/r-help
 |>  PLEASE do read the posting guide 
 |>  http://www.R-project.org/posting-guide.html
 |>  and provide commented, minimal, self-contained, reproducible code.
 |>  
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Problem launching Rcmdr

2020-03-11 Thread Pfaff, Bernhard Dr.
Good catch, Peter; Cylance might be the culprit - at least I have encountered 
problems when compiling C++ sources and/or building packages with interfaced 
routines, where a memory checker kicked in.
Maybe something similar is happening when launching Rcmdr (tcl/tk)?

-Ursprüngliche Nachricht-
Von: R-help  Im Auftrag von Peter Dalgaard
Gesendet: Mittwoch, 11. März 2020 10:29
An: Fox, John 
Cc: r-help@r-project.org
Betreff: [EXT] Re: [R] Problem launching Rcmdr

Any chance that a virus checker is interfering?

-pd

> On 10 Mar 2020, at 23:43 , Fox, John  wrote:
> 
> Dear Brian,
> 
> (Please keep r-help in the loop so that if someone else has this 
> problem they'll have something to refer to.)
> 
> Your session at start-up seems completely clean, so I'm at a loss to 
> understand what the problem is. I, and I assume very many other people, are 
> using the Rcmdr with essentially the same Windows setup. What's particularly 
> hard for me to understand is that you're able to start the Rcmdr in a second 
> R session. Does the first R session have to remain open for this to work?
> 
> A next step is to reinstall packages, starting with the Rcmdr package, if you 
> haven't already tried that, and eventually to reinstall R, including deleting 
> the R package library. BTW, I usually prefer to install R in c:\R\ rather 
> than under Program Files so that the system library is used for packages that 
> I subsequently install, although it should work perfectly fine to install 
> packages into a personal library.
> 
> Best,
> John
> 
>> -Original Message-
>> From: Brian Grossman 
>> Sent: Tuesday, March 10, 2020 5:07 PM
>> To: Fox, John 
>> Subject: Re: [R] Problem launching Rcmdr
>> 
>> John,
>> 
>> Thanks for the reply. Here is the output from running sessionInfo() 
>> right after opening R.
>> 
>>> sessionInfo()
>> R version 3.6.2 (2019-12-12)
>> Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 
>> x64 (build 18362)
>> 
>> Matrix products: default
>> 
>> locale:
>> [1] LC_COLLATE=English_United States.1252 [2] LC_CTYPE=English_United
>> States.1252 [3] LC_MONETARY=English_United States.1252 [4] 
>> LC_NUMERIC=C [5] LC_TIME=English_United States.1252
>> 
>> attached base packages:
>> [1] stats graphics  grDevices utils datasets  methods   base
>> 
>> loaded via a namespace (and not attached):
>> [1] compiler_3.6.2
>> 
>> 
>> Brian
>> 
>> On Tue, Mar 10, 2020 at 8:46 AM Fox, John >  > wrote:
>> 
>> 
>>  Dear Brian,
>> 
>>  Normally I'd expect that a workspace saved from a previous session 
>> and loaded at the start of the current session would cause this kind 
>> of anomalous behaviour, but that doesn't explain why the Rcmdr starts 
>> up properly in a second (concurrent?) session, nor why it doesn't 
>> start up properly when R is run with the --vanilla switch.
>> 
>>  Can you report the result of sessionInfo() at the start of a session?
>> 
>>  If all else fails, you could try uninstalling and reinstalling R and 
>> packages.
>> 
>>  Best,
>>   John
>> 
>>-
>>John Fox, Professor Emeritus
>>McMaster University
>>Hamilton, Ontario, Canada
>>Web: http://socserv.mcmaster.ca/jfox
>> 
>>  > On Mar 9, 2020, at 3:25 PM, Brian Grossman >  > wrote:
>>  >
>>  > I'm having a problem with launching Rcmdr. When I try to launch it 
>> the
>>  > first time through R using the command library(Rcmdr) it will go 
>> through
>>  > the process of launching and get to the point where it says
>>  >
>>  > "Registered S3 methods overwritten by 'lme4':
>>  >  method  from
>>  >  cooks.distance.influence.merMod car
>>  >  influence.merModcar
>>  >  dfbeta.influence.merMod car
>>  >  dfbetas.influence.merModcar
>>  > lattice theme set by effectsTheme()
>>  > See ?effectsTheme for details."
>>  >
>>  > and then it just hangs there and never launches Rcmdr. If you 
>> launch
>>  > another instance of R and run the same command it will complete 
>> and launch
>>  > Rcmdr successfully. I have tried launching R with R.exe --vanilla 
>> with the
>>  > same results.
>>  >
>>  > The system information is Windows 10 version 1903, i5 8500 
>> processor, 8GB
>>  > RAM, 256Gb  SSD. R version 3.6.2 Platform: x86_64-w64-
>> mingw32/x64 (64-bit)
>>  >
>>  > Hopefully I haven't left out any important information. Thank you 
>> for any
>>  > suggestions.
>>  >
>>  >   [[alternative HTML version deleted]]
>>  >
>>  > 

Re: [R] VAR function in vars package: find the standard deviation of the error

2013-08-05 Thread Pfaff, Bernhard Dr.
library(vars)
data(Canada)
mod <- VAR(Canada, p = 2, type = "both")
apply(resid(mod), 2, sd)

See also ?summary and, in particular, the returned list element 'covres'.
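
For example (a sketch continuing from the code above):

vsum <- summary(mod)
vsum$covres              # residual covariance matrix
sqrt(diag(vsum$covres))  # implied standard deviations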

HTH,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von jpm miao
Gesendet: Freitag, 2. August 2013 11:11
An: r-help
Betreff: [R] VAR function in vars package: find the standard deviation of the 
error

Hi,

   Does someone know how to find the standard deviation of the error term in 
the VAR object? The whole structure of the VAR is attached.


   Thanks,
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sorting the VAR model output according to variable names??

2013-04-10 Thread Pfaff, Bernhard Dr.
Dear LondonPhd,

assuming that you have assigned 'mod' to your VAR() call, you can run the 
following:

lapply(coef(mod), function(x) x[sort(rownames(x)), ])

In general, the coef-method will retrieve the estimated coefficients and you 
can then do the reordering to your liking.
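
For illustration, a sketch with the example data shipped with the package:

library(vars)
data(Canada)
mod <- VAR(Canada, p = 2, type = "const")
lapply(coef(mod), function(x) x[sort(rownames(x)), ])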

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von londonphd
Gesendet: Dienstag, 9. April 2013 16:59
An: r-help@r-project.org
Betreff: [R] sorting the VAR model output according to variable names??

I was wondering if one can have the coefficients of VAR model sorted according 
to variable names rather than lags. If you notice below, the output is sorted 
according to lags.

>VAR(cbind(fossil,labour),p=2,type="const")

VAR Estimation Results:
=== 

Estimated coefficients for equation fossil: 
===
Call:
fossil = fossil.l1 + labour.l1 + fossil.l2 + labour.l2 + const 

 fossil.l1  labour.l1  fossil.l2  labour.l2  const 
 0.4686535 -0.5324335  0.2308964  0.8777865 -0.6711897 

Estimated coefficients for equation labour: 
===
Call:
labour = fossil.l1 + labour.l1 + fossil.l2 + labour.l2 + const 

  fossil.l1   labour.l1   fossil.l2   labour.l2   const 
 0.01431961  0.99648957  0.04160058 -0.11316312  1.11396823


If you take the last equation above (labour equation) the output is given as
follows:
fossil.l1   labour.l1   fossil.l2   labour.l2  const


is there any way I can have the output of this equation as follows:
fossil.l1  fossil.l2  labour.l1   labour.l2  const


It makes it easy to do hypothesis testing on specific lagged coefficients.



--
View this message in context: 
http://r.789695.n4.nabble.com/sorting-the-VAR-model-output-according-to-variable-names-tp4663770.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] The weak exogeneity test in R for the Error Correction Model?

2013-06-04 Thread Pfaff, Bernhard Dr.
Hello Rebecca,

Set up your model as a bivariate VECM (use ca.jo() and create a matrix of your 
x and y variables) and invoke alrtest() on the returned object as already 
mentioned by you. See the example section of alrtest() for how to accomplish this.
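
A minimal sketch for a bivariate system (y, x), testing whether x is weakly
exogenous, i.e. whether the loading (alpha) in the x-equation is zero:

library(urca)
vecm <- ca.jo(cbind(y, x), type = "trace", ecdet = "const", K = 2, spec = "transitory")
A <- matrix(c(1, 0), nrow = 2, ncol = 1)   # alpha restricted to load in the y-equation only
summary(alrtest(vecm, A = A, r = 1))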

Best,
Bernhard
Dr. Bernhard Pfaff
Director
Global Asset Allocation

Invesco Asset Management Deutschland GmbH
An der Welle 5
D-60322 Frankfurt am Main

Tel: +49 (0)69 29807 230
Fax: +49 (0)69 29807 178
www.institutional.invesco.com
Email: bernhard_pf...@fra.invesco.com

Geschäftsführer: Karl Georg Bayer, Bernhard Langer, Dr. Jens Langewand, 
Alexander Lehmann, Christian Puschmann
Handelsregister: Frankfurt am Main, HRB 28469
Sitz der Gesellschaft: Frankfurt am Main




- Originalnachricht -
Von: Yuan, Rebecca [mailto:rebecca.y...@bankofamerica.com]
Gesendet: Tuesday, May 28, 2013 05:16 PM
An: R help 
Betreff: [R] The weak exogeneity test in R for the Error Correction Model?

Hello all,

I would like to carry out a single-equation approach of the Error Correction 
Model such as

Delta_y(t) = a + b*y(t-1) + c*x1(t-1) + d*x2(t-1) + e*delta_x1(t) + 
f*delta_x2(t) + epsilon(t)

Where, a, b, c, d, e, f are coefficients to be estimated, y is the dependent 
variable, and x1, x2 are independent variables.

For the single equation approach of ECM, there is a requirement of the weak 
exogeneity. How could I carry out the test to see if there is weak exogeneity 
in the above system?

I read the book "Bernhard-Analysis of Integrated and Cointegrated Time Series" 
where in section 8.1.3 it uses alrtest() for the weak exogeneity test. But that 
is for the vector ECM, where y is of five components, where in my example, y is 
a scalar, only one component. What would be the best way for me to test the 
weak exogeneity for the above approach ECM?

http://books.google.com/books?id=ca5MkRbF3fYC

Thanks very much!

Cheers,

Rebecca

--
This message, and any attachments, is for the intended r...{{dropped:18}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Diagnostic testing in a VEC

2012-10-09 Thread Pfaff, Bernhard Dr.
Hello Laura,

you convert your VEC model to its level-VAR representation and employ the 
diagnostic tests you mentioned. This can be accomplished with the 
functions/methods contained in the package 'vars'. You might want to have a 
look at the vignette of the latter package.
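
In code, roughly (a sketch; 'vecm' is your ca.jo object and r the chosen
cointegration rank):

library(vars)
varlevel <- vec2var(vecm, r = 1)
serial.test(varlevel)      # serial correlation
normality.test(varlevel)   # residual normality
arch.test(varlevel)        # heteroskedasticity (ARCH effects)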

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Laura Catalina Echeverri Guzmán
Gesendet: Montag, 8. Oktober 2012 17:34
An: r-help@r-project.org
Betreff: [R] Diagnostic testing in a VEC

Hi everyone,

I'm using the Johansen framework to determine a VEC using package urca. I have 
estimated the corresponding VEC using likelihood ratio test for restrictions on 
alpha, beta or both and I have generated objects of the class cajo.test. Now I 
want to diagnostic tests in the model, like heteroskedasticity test, residuals 
normality and serial autocorrelation, but I cannot find the way to do it in 
objects like the one I have. How can I do it? what packages/methods I may use? 
Do I have to transform this object in other object to do it easily?

I would appreciate if someone could help me with this issue.

Thank you!

--
Laura Catalina Echeverri Guzmán

[[alternative HTML version deleted]]

*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] package vars doesn´t working

2010-10-19 Thread Pfaff, Bernhard Dr.
Dear Claudio,

hard to tell without further information, but I reckon that you:

1) have a secondary library in use
2) have installed the packages 'vars' **and** 'MASS' into this 
secondary library

If so, remove the package 'MASS' from this secondary library (it's shipped in 
the standard library in your R installation already). Hint: check whether you 
have installed other recommended packages into your secondary library and if 
so, remove these, too. Otherwise you might encounter the same problem with 
packages that do depend on the ones that are already shipped in the primary 
library of your R installation.
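
To check and clean up, something along these lines could be used (a sketch;
adjust the index to whichever entry of .libPaths() is your personal library):

.libPaths()                                        # lists all library paths
installed.packages(lib.loc = .libPaths()[1])[, "Package"]
remove.packages("MASS", lib = .libPaths()[1])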

This all is just a guess, because you have not provided enough information to 
diagnose further.

Best,
Bernhard 

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Claudio Shikida ( ?)
Gesendet: Dienstag, 19. Oktober 2010 10:19
An: r-help@r-project.org
Betreff: [R] package vars doesn´t working

Hello,

I was using R (v.2.11.1, 32 bits) and I did the upgrade to R (v.2.12.0, 64 
bits). I followed the instructions in R's FAQ ("What's the best way to upgrade", 
question 2.8) and updated my packages. However, now, I can't use the library 
"vars". When I call it, there is an error message concerning the package "MASS" 
which couldn't be updated because it seems to be no longer available in R's 
repositories.

Is this a problem with "vars"? Maybe it will have to be updated soon with no 
more need to ask for "MASS"? Is there another way to invoke "vars"?

Thanks for your time and attention

--
http://shikida.net  and http://works.bepress.com/claudio_shikida/

This message may contain confidential and/or privileged ...{{dropped:9}}

*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.
*
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Dickey Fuller Test

2010-10-29 Thread Pfaff, Bernhard Dr.
Dear Cuckovic,

although you have already received an answer to your post that relates more to 
the time-series characteristics of your data in question, I will take up your 
initial question. Basically, you got trapped by the word 'time series' in the 
documentation for adf.test(). What is meant is an object of the informal class 
ts, hence:

YYY <- as.ts(Y)
adf.test(Y)
adf.test(YYY)

does yield the same result. Now, what happens if an object of the formal class 
timeSeries is supplied? Well, have a look at the body of adf.test directly (just 
type adf.test at the prompt).

Here you will see that the series is differenced, but this operation is 
applied differently for numeric/ts objects versus timeSeries objects; check:

showMethods(diff)

and/or

diff(Y)
diff(YY)
diff(YYY)

Now, to rectify your results, use:

adf.test(series(YY))

instead. Here, only the data part of your timeSeries object is extracted, and 
hence the same method for diff() is used as in the case of numeric/ts objects.

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Cuckovic Paik
Gesendet: Freitag, 29. Oktober 2010 05:48
An: r-help@r-project.org
Betreff: [R] Dickey Fuller Test


Dear Users, please help with the following DF test:
=
library(tseries)
library(timeSeries)

Y=c(3519,3803,4332,4251,4661,4811,4448,4451,4343,4067,4001,3934,3652,3768
,4082,4101,4628,4898,4476,4728,4458,4004,4095,4056,3641,3966,4417,4367
,4821,5190,4638,4904,4528,4383,4339,4327,3856,4072,4563,4561,4984,5316
,4843,5383,4889,4681,4466,4463,4217,4322,4779,4988,5383,5591,5322,5404
,5106,4871,4977,4706,4193,4460,4956,5022,5408,5565,5360,5490,5286,5257
,5002,4897,4577,4764,5052,5251,5558,5931,5476,5603,5425,5177,4792,4776
,4450,4659,5043,5233,5423,5814,5339,5474,5278,5184,4975,4751,4600,4718
,5218,5336,5665,5900,5330,5626,5512,5293,5143,4842,4627,4981,5321,5290
,6002,5811,5671,6102,5482,5429,5356,5167,4608,4889,5352,5441,5970,5750
,5670,5860,5449,5401,5240,5229,4770,5006,5518,5576,6160,6121,5900,5994
,5841,5832,5505,5573,5331,5355,6057,6055,6771,6669,6375,,6383,6118
,5927,5750,5122,5398,5817,6163,6763,6835,6678,6821,6421,6338,6265,6291
,5540,5822,6318,6268,7270,7096,6505,7039,6440,6446,6717,6320)

YY=as.timeSeries(Y)

adf.test(Y)
adf.test(YY)
  Output 
> adf.test(Y)

Augmented Dickey-Fuller Test

data:  Y
Dickey-Fuller = -6.1661, Lag order = 5, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(Y) : p-value smaller than printed p-value
> adf.test(YY)

Augmented Dickey-Fuller Test

data:  YY
Dickey-Fuller = 12.4944, Lag order = 5, p-value = 0.99
alternative hypothesis: stationary

Warning message:
In adf.test(YY) : p-value greater than printed p-value
> 
==
Question: Why the two results are different?

The help file says that the input series is either a numeric vector or a time 
series object. But the results are completely opposite if the different types 
of arguments are used. Thanks in advance.







--
View this message in context: 
http://r.789695.n4.nabble.com/Dickey-Fuller-Test-tp3018408p3018408.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] question of VECM restricted regression

2011-05-02 Thread Pfaff, Bernhard Dr.
Hello Meilan:

'ect' is shorthand for the error-correction term, 'sd' signifies the seasonal 
dummy variables, and 'LRM.dl1' is the lagged first difference of the variable 
'LRM' (the log of real money demand).

HTH,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Meilan Yan
> Gesendet: Freitag, 29. April 2011 11:10
> An: bernhard.pf...@pfaffikus.de; r-help@r-project.org
> Betreff: [R] question of VECM restricted regression
> 
> Dear Colleague
> 
>   I am trying to figure out how to use R to do OLS restricted 
> VECM regression. However, there are some notation I cannot understand.
> 
> Please tell me what is 'ect',  'sd' and 'LRM.dl1  in the 
> following practice:
> 
> #OLS retricted VECM regression
> data(denmark)
> sjd <- denmark[, c("LRM", "LRY", "IBO", "IDE")]
> sjd.vecm<- ca.jo(sjd, ecdet = "const", type="eigen", K=2, 
> spec="longrun",
> season=4)
> sjd.vecm.rls<-cajorls(sjd.vecm,r=1)
> summary(sjd.vecm.rls$rlm)
> sjd.vecm.rls$beta
> 
> Response LRM.d :
> Call:
> lm(formula = substitute(LRM.d), data = data.mat)
> 
> Residuals:
>       Min        1Q    Median        3Q       Max
> -0.027598 -0.012836 -0.003395  0.015523  0.056034
> 
> Coefficients:
>  Estimate Std. Error t value Pr(>|t|)
> ect1-0.212955   0.064354  -3.309  0.00185 **
> sd1 -0.057653   0.010269  -5.614 1.16e-06 ***
> sd2 -0.016305   0.009177  -1.777  0.08238 .
> sd3 -0.040859   0.008767  -4.660 2.82e-05 ***
> LRM.dl1  0.049816   0.191992   0.259  0.79646
> LRY.dl1  0.075717   0.157902   0.480  0.63389
> IBO.dl1 -1.148954   0.372745  -3.082  0.00350 **
> IDE.dl1  0.227094   0.546271   0.416  0.67959
> 
> > sjd.vecm.rls$beta
>             ect1
> LRM.l2      1.00
> LRY.l2 -1.032949
> IBO.l2  5.206919
> IDE.l2 -4.215879
> 
> 
> Many thanks
> Meilan
> 
> 
> 
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] MacKinnon critical value

2011-05-06 Thread Pfaff, Bernhard Dr.
Hello Lee,

in addition to David's answer, see: ?MacKinnonPValues in package 'urca' (CRAN 
and R-Forge). 

Best,
Bernhard

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von David Winsemius
> Gesendet: Freitag, 6. Mai 2011 04:46
> An: Lee Schulz
> Cc: R-help@r-project.org
> Betreff: Re: [R] MacKinnon critical value
> 
> 
> On May 5, 2011, at 9:10 PM, Lee Schulz wrote:
> 
> > Hello,
> >
> >
> >
> > I am doing an Engle Granger test on the residuals of two I(1) 
> > processes.
> > I would like to get the MacKinnon (1996) critical value, 
> say at 10%.  
> > I have 273 observations with 5 integrated explanatory 
> variables , so 
> > that k=4.  Could someone help me with the procedure in R?
> 
> See if this helps:
> 
> http://search.r-project.org/cgi-bin/namazu.cgi?query=Engle+Granger+mackinnon&max=100&result=normal&sort=score&idxname=functions&idxname=Rhelp08&idxname=Rhelp10&idxname=Rhelp02
> 
> --
> David Winsemius, MD
> West Hartford, CT
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimal choice of the threshold u in Peak Over Threshold (POT)Approach

2011-02-11 Thread Pfaff, Bernhard Dr.
Dear Fir,

for instance, have a look at the package 'ismev' and the function mrl.plot(). 
The CRAN task view 'Finance' lists many more packages that address EVT under 
the topic 'Risk management'.
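
For example (a sketch, assuming 'x' contains your loss data):

library(ismev)
mrl.plot(x)   # mean residual life plot; pick u where the plot becomes roughly linear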

Best,
Bernhard

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von FMH
> Gesendet: Donnerstag, 10. Februar 2011 19:28
> An: r-help@r-project.org
> Betreff: [R] Optimal choice of the threshold u in Peak Over 
> Threshold (POT)Approach
> 
> Dear All,
> 
> Could someone please suggest me the way to calculate the 
> optimal threshold in POT method via any available  packages in R?
> 
> Thanks,
> Fir
> 
> 
> 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VAR with HAC

2011-02-17 Thread Pfaff, Bernhard Dr.
Hello Marta,

have you read ?coeftest and ?VAR carefully enough? coeftest() expects an lm/glm 
object as its argument x. Hence, the following does work:

library(vars)
data(Canada)
myvar <- VAR(Canada, p = 2, type = "const")
lapply(myvar$varresult, coeftest)
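
And heteroskedasticity-consistent covariances can be plugged in per equation
like so (a sketch):

library(sandwich)
lapply(myvar$varresult, coeftest, vcov. = vcovHC)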

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Marta Lachowska
> Gesendet: Mittwoch, 16. Februar 2011 16:50
> An: r-help@r-project.org
> Betreff: [R] VAR with HAC
> 
> 
> Hello,
> I would like to estimate a VAR model with HAC corrected 
> standard errors. I tried to do this by using the sandwich 
> package, for example: 
>  
> > library(vars)
> > data(Canada)
> > myvar = VAR(Canada, p = 2, type = "const") coeftest(myvar, vcov = 
> > vcovHAC)
> Error in umat - res : non-conformable arrays
>  
> Which suggests that this function is not compatible with the 
> VAR command. Has anyone tried to modify the code to get HAC 
> corrected standard errors with VAR? Any suggestions are welcome. 
>  
> Thank you. 
>  
> Marta
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VAR with HAC

2011-02-17 Thread Pfaff, Bernhard Dr.
Hello Marta,
 
argh, sorry, I did not read your message carefully enough. Well, if you follow 
the cited thread further down, you will find the following 
(quoting directly from 
https://stat.ethz.ch/pipermail/r-sig-finance/2009q2/004274.html):

<> 
On Tue, 9 Jun 2009, Matthieu Stigler wrote:

> Hi
>
> I wasn't aware of the fact that HAC is not designed for time series model 
> (thanks Achim!). But nevertheless I think that HC is still usable, well at > 
> least I saw it in couple of papers dealing with times series.
>
> So if you still want to use an HC, two solutions:
>
> A solve the problem (workaround):

No, there is no "problem" at least not from the "sandwich" point of view. 
If you want "sandwich" to cooperate with "varest" objects, you just need 
to provide the appropriate methods (essentially, bread() and estfun()) for 
"varest" objects. This is a clean and non-invasive solution and not very 
difficult to implement given that all this is OLS.
<>
 
and further down in this reply, Achim is referring you to the package's 
vignette:

<> 
vignette("sandwich-OOP", package = "sandwich")
<>

Best,
Bernhard
 
 
 
 


____

        Von: Marta Lachowska [mailto:ma...@upjohn.org] 
Gesendet: Donnerstag, 17. Februar 2011 17:01
An: Pfaff, Bernhard Dr.; r-help@r-project.org
Betreff: Re: AW: [R] VAR with HAC


Thank you for your hint!

I see that there was a thread discussing implementation of what I 
wanted to do (Newey-West standard errors in a VAR context), but that there is a 
conflict due to how the type = "const" is defined in the VAR command: 
https://stat.ethz.ch/pipermail/r-sig-finance/2009q2/004272.html that appears 
not to be resolved. 
 
Best, 
 
Marta
 

>>> "Pfaff, Bernhard Dr."  2/17/2011 
4:31 AM >>>
Hello Marta,

have you read ?coeftest and ? VAR carefully enough? The function does 
expect a lm/glm object for x as argument. Hence, the following does work:

library(vars)
data(Canada)
myvar <- VAR(Canada, p = 2, type = "const")
lapply(myvar$varresult, coeftest)

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Marta Lachowska
> Gesendet: Mittwoch, 16. Februar 2011 16:50
> An: r-help@r-project.org
> Betreff: [R] VAR with HAC
> 
> 
> Hello,
> I would like to estimate a VAR model with HAC corrected 
> standard errors. I tried to do this by using the sandwich 
> package, for example: 
>  
> > library(vars)
> > data(Canada)
> > myvar = VAR(Canada, p = 2, type = "const") coeftest(myvar, vcov = 
> > vcovHAC)
> Error in umat - res : non-conformable arrays
>  
> Which suggests that this function is not compatible with the 
> VAR command. Has anyone tried to modify the code to get HAC 
> corrected standard errors with VAR? Any suggestions are welcome. 
>  
> Thank you. 
>  
> Marta
> 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.
*



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multivariate Granger Causality Tests

2011-03-03 Thread Pfaff, Bernhard Dr.
Dear Hazzard I. Petzev,

you might find causality() in the package vars useful.
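
For instance (a sketch with the package's example data; the 'cause' argument
accepts more than one variable):

library(vars)
data(Canada)
mod <- VAR(Canada, p = 2, type = "const")
causality(mod, cause = c("e", "prod"))   # do e and prod jointly Granger-cause the rest?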

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von hazzard
> Gesendet: Donnerstag, 3. März 2011 10:07
> An: r-help@r-project.org
> Betreff: [R] Multivariate Granger Causality Tests
> 
> Dear Community,
> 
> For my masters thesis I need to perform a multivariate 
> granger causality test. I have found a code for bivariate 
> testing on this page 
> (http://www.econ.uiuc.edu/~econ472/granger.R.txt), which I 
> think would not be useful for the multivariate case. Does 
> anybody know a code for a multivariate granger causality 
> test. Thank you in advance.
> 
> Best Regards
> 
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Multivariate-Granger-Causality-T
ests-tp3332968p3332968.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Garchoxfit package

2011-03-28 Thread Pfaff, Bernhard Dr.
Dear Ning,

are you referring to the deprecated function garchOxFit() of the package 
fGarch, formerly contained in fSeries? If so:

library(sos)
findFn("garchOxFit")

which yields:

http://finzi.psych.upenn.edu/R/library/fGarch/html/00fGarch-package.html

And there you will find at the bottom of the page:

OX Interface

NOTE: garchOxFit is no longer part of fGarch package. If you are interested to 
use, please contact us. 

contains a Windows interface to OX. 

The function garchOxFit interfaces a subset of the functionality of the G@ARCH 
4.0 Package written in Ox. G@RCH 4.0 is one of the most sophisticated packages 
for modelling univariate GARCH processes including GARCH, EGARCH, GJR, APARCH, 
IGARCH, FIGARCH, FIEGARCH, FIAPARCH and HYGARCH models. Parameters can be 
estimated by approximate (Quasi-) maximum likelihood methods under four 
assumptions: normal, Student-t, GED or skewed Student-t errors. 

Furthermore, do you have Ox installed on your PC?

http://www.doornik.com/products.html


Best,
Bernhard


> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Ning Cheng
> Gesendet: Sonntag, 27. März 2011 05:16
> An: r-help@r-project.org
> Betreff: [R] Garchoxfit package
> 
> Dear List,
> I'm now using Ubuntu 10.10 and I want to use the garchoxfit 
> function.It seems that I need to download the package.
> 
> While after installing the package,I still can't use the 
> garchoxfit function.What's the reason and how to fix that?
> 
> Thanks for your time!
> 
> Best,
> Ning
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VECM with UNRESTRICTED TREND

2011-03-30 Thread Pfaff, Bernhard Dr.
Hello Greg,

you can exploit the argument 'dumvar' for this. See ?ca.jo

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Grzegorz Konat
> Gesendet: Mittwoch, 30. März 2011 16:46
> An: r-help@r-project.org
> Betreff: [R] VECM with UNRESTRICTED TREND
> 
> Dear All,
> 
> My question is:
> 
> how can I estimate VECM system with "unrestricted trend" (aka 
> "case 5") option as a deterministic term?
> 
> As far as I know, ca.jo in urca package allows for "restricted trend"
> only [vecm
> <- ca.jo(data, type = "trace"/"eigen", ecdet = "trend", K = 
> n, spec = "transitory"/"longrun")].
> Obviously, I don't have to do this in urca, so if another 
> package gives the possibility, please let me know too!
> 
> Thanks in advance!
> 
> Greg
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VECM with UNRESTRICTED TREND

2011-03-31 Thread Pfaff, Bernhard Dr.
Hello Greg,
 
you include your trend as an (N x 1) matrix and use this for 'dumvar'. The matrix 
'dumvar' is just added to the VECM as deterministic regressors, and since you 
are referring to case 5, this is basically what you are after, if I am not 
mistaken. But be aware that this implies a quadratic trend for the levels.
 
Best,
Bernhard




Von: Grzegorz Konat [mailto:grzegorz.ko...@ibrkk.pl] 
Gesendet: Mittwoch, 30. März 2011 20:50
    An: Pfaff, Bernhard Dr.; r-help@r-project.org
Betreff: Re: [R] VECM with UNRESTRICTED TREND


Hello Bernhard, 

Thank You very much. Unfortunately I'm still not really sure how should 
I use dummy vars in this context...
If I have a system of three variables (x, y, z), lag order = 2 and 1 
cointegrating relation, what should I do? I mean, what kind of 'pattern' should 
be used to create those dummy variables, what should they represent and how 
many of them do I need?

Many thanks in advance!

Best,
Greg
    
    
    2011/3/30 Pfaff, Bernhard Dr. 


Hello Greg,

you can exploit the argument 'dumvar' for this. See ?ca.jo

Best,
Bernhard

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Grzegorz 
Konat
> Gesendet: Mittwoch, 30. März 2011 16:46
> An: r-help@r-project.org
> Betreff: [R] VECM with UNRESTRICTED TREND

>
> Dear All,
>
> My question is:
>
> how can I estimate VECM system with "unrestricted trend" (aka
> "case 5") option as a deterministic term?
>
> As far as I know, ca.jo in urca package allows for 
"restricted trend"
> only [vecm
> <- ca.jo(data, type = "trace"/"eigen", ecdet = "trend", K =
> n, spec = "transitory"/"longrun")].
> Obviously, I don't have to do this in urca, so if another
> package gives the possibility, please let me know too!
>
> Thanks in advance!
>
> Greg
>

>   [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible 
code.
>


*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, 
dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.

*





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VECM with UNRESTRICTED TREND

2011-03-31 Thread Pfaff, Bernhard Dr.

 

Hello Bernhard, 


thank You so much one again! Now I (more or less) understand the idea, 
but still have problem with its practical application.


I do (somewhat following example 8.1 in your textbook):


library(urca)
data(my.data)
names(my.data)
attach(my.data)
dat1 <- my.data[, c("dY", "X", "dM")]
dat2 <- cbind(time)
 
What is 'time'? Just employ matrix(seq(1:nrow(dat1)), ncol = 1) for 
creating the trend variable.
 
Best,
Bernhard
 
 
 args('ca.jo')
yxm.vecm <- ca.jo(dat1, type = "trace", ecdet = "trend", K = 2, spec = 
"longrun", dumvar=dat2)

The above code produces following output:

Error in r[i1, , drop = FALSE] - r[-nrow(r):-(nrow(r) - lag + 1L), , 
drop = FALSE] : 
  non-numeric argument to binary operator

What does that mean? Should I use cbind command to dat1 as well? And 
doesn't it transform the series into series of integer numbers?

Thank you once again (especially for your patience).

Best,
Greg



2011/3/31 Pfaff, Bernhard Dr. 


Hello Greg,
 
you include your trend as a (Nx1) matrix and use this for 
'dumvar'. The matrix 'dumvar' is just added to the VECM as deterministic 
regressors and while you are referring to case 5, this is basically what you 
are after, if I am not mistaken. But we aware that this implies a quadratic 
trend for the levels.
 
Best,
Bernhard




Von: Grzegorz Konat [mailto:grzegorz.ko...@ibrkk.pl] 
Gesendet: Mittwoch, 30. März 2011 20:50
An: Pfaff, Bernhard Dr.; r-help@r-project.org
Betreff: Re: [R] VECM with UNRESTRICTED TREND


Hello Bernhard, 

Thank You very much. Unfortunately I'm still not really 
sure how should I use dummy vars in this context...
If I have a system of three variables (x, y, z), lag 
order = 2 and 1 cointegrating relation, what should I do? I mean, what kind of 
'pattern' should be used to create those dummy variables, what should they 
represent and how many of them do I need?

Many thanks in advance!

        Best,
    Greg


2011/3/30 Pfaff, Bernhard Dr. 



Hello Greg,

you can exploit the argument 'dumvar' for this. 
See ?ca.jo

Best,
Bernhard

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Grzegorz Konat
> Gesendet: Mittwoch, 30. März 2011 16:46
> An: r-help@r-project.org
> Betreff: [R] VECM with UNRESTRICTED TREND

>
> Dear All,
>
> My question is:
>
> how can I estimate VECM system with 
"unrestricted trend" (aka
> "case 5") option as a deterministic term?
>
> As far as I know, ca.jo in urca package 
allows for "restricted trend"
> only [vecm
> <- ca.jo(data, type = "trace"/"eigen", ecdet 
= "trend", K =
> n, spec = "transitory"/"longrun")].
> Obviously, I don't have to do this in urca, 
so if another
> package gives the possibility, please let me 
know too!
>
> Thanks in advance!
>
> Greg
>

>

Re: [R] VECM with UNRESTRICTED TREND

2011-03-31 Thread Pfaff, Bernhard Dr.
Well, without further information, I do not know, but try the following
 
library(urca)
example(ca.jo)
trend <- matrix(1:nrow(sjf), ncol = 1)
colnames(trend) <- "trd"
ca.jo(sjf, type = "trace", ecdet = "const", K = 2, spec = "longrun", dumvar = 
trend)
 
Best,
Bernhard
 
 




Von: Grzegorz Konat [mailto:grzegorz.ko...@ibrkk.pl] 
Gesendet: Donnerstag, 31. März 2011 14:40
An: Pfaff, Bernhard Dr.; r-help@r-project.org
Betreff: Re: [R] VECM with UNRESTRICTED TREND


'time' was a trend variable from my.data set. Equivalent to the output 
of the command 'matrix' you just gave me. 


So now I did:


library(urca)
data(my.data)
names(my.data)
attach(my.data)
dat1 <- my.data[, c("dY", "X", "dM")]
mat1 <- matrix(seq(1:nrow(dat1)), ncol = 1)
args('ca.jo')
yxm.vecm <- ca.jo(dat1, type = "trace", ecdet = "const", K = 2, spec = 
"longrun", dumvar=mat1)

and the output is:

Error in r[i1, , drop = FALSE] - r[-nrow(r):-(nrow(r) - lag + 1L), , 
drop = FALSE] : 
  non-numeric argument to binary operator
In addition: Warning message:
In ca.jo(dat1, type = "trace", ecdet = "const", K = 2, spec = 
"longrun",  :
No column names in 'dumvar', using prefix 'exo' instead.

What do I do wrong?

Best,
Greg


2011/3/31 Pfaff, Bernhard Dr. 



 

Hello Bernhard, 


thank You so much one again! Now I (more or less) 
understand the idea, but still have problem with its practical application.


I do (somewhat following example 8.1 in your textbook):


library(urca)
data(my.data)
names(my.data)
attach(my.data)
dat1 <- my.data[, c("dY", "X", "dM")]
dat2 <- cbind(time)
 
What is 'time'? Just employ matrix(seq(1:nrow(dat1)), 
ncol = 1) for creating the trend variable.
 
Best,
Bernhard
 
 
 args('ca.jo')
yxm.vecm <- ca.jo(dat1, type = "trace", ecdet = 
"trend", K = 2, spec = "longrun", dumvar=dat2)

The above code produces following output:

Error in r[i1, , drop = FALSE] - r[-nrow(r):-(nrow(r) - 
lag + 1L), , drop = FALSE] : 
  non-numeric argument to binary operator

What does that mean? Should I use cbind command to dat1 
as well? And doesn't it transform the series into series of integer numbers?

Thank you once again (especially for your patience).

Best,
Greg



2011/3/31 Pfaff, Bernhard Dr. 



Hello Greg,
 
you include your trend as a (Nx1) matrix and 
use this for 'dumvar'. The matrix 'dumvar' is just added to the VECM as 
deterministic regressors and while you are referring to case 5, this is 
basically what you are after, if I am not mistaken. But we aware that this 
implies a quadratic trend for the levels.
 
    Best,
        Bernhard




Von: Grzegorz Konat 
[mailto:grzegorz.ko...@ibrkk.pl] 
Gesendet: Mittwoch, 30. März 2011 20:50
An: Pfaff, Bernhard Dr.; 
r-help@r-project.org
Betreff: Re: [R] VECM with UNRESTRICTED 
TREND


Hello Bernhard, 

Thank You very much. Unfortunately I'm 
still not really sure how should I use dummy vars in this context...
If I have a system of three variables 
(x, y, z), lag order = 2 and 1 cointegrating relation, 

[R] TV VECM (formerly: VECM with UNRESTRICTED TREND)

2011-04-01 Thread Pfaff, Bernhard Dr.
Dear Renoir,

are you referring to:

http://econ.la.psu.edu/~hbierens/TVCOINT.PDF 

?
If so, no, but you could set up this framework fairly easily and thereby employ 
the functions of urca. But this should already be evident from the package's 
manual. 

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: renoir vieira [mailto:renoirvie...@gmail.com] 
> Gesendet: Donnerstag, 31. März 2011 22:27
> An: Grzegorz Konat
> Cc: Pfaff, Bernhard Dr.; r-help@r-project.org
> Betreff: Re: [R] VECM with UNRESTRICTED TREND
> 
> Dear Pfaff,
> 
> Would that be possible to fit a Time varying VECM using urca?
> 
> Yours,
> Renoir
> 
> On Thursday, March 31, 2011, Grzegorz Konat 
>  wrote:
> > The code you gave me works fine with Finland, but the same 
> for my data 
> > - does not!
> > I do:
> >
> > library(urca)
> > data(my.data)
> > dat1 <- my.data[, c("dY", "X", "dM")]
> > trend <- matrix(1:nrow(dat1), ncol = 1)
> > colnames(trend) <- "trd"
> > yxm.vecm <- ca.jo(dat1, type = "trace", ecdet = "const", K 
> = 2, spec = 
> > "longrun", dumvar = trend)
> >
> > and the result is again:
> >
> > Error in r[i1, , drop = FALSE] - r[-nrow(r):-(nrow(r) - lag 
> + 1L), , 
> > drop = FALSE] :
> >   non-numeric argument to binary operator
> >
> > I attach my dataset in xls format. If you have 5 minutes 
> and wish to 
> > check it out, I'd be extremely grateful!
> >
> > Best,
> > Greg
> >
> >
> >
> > 2011/3/31 Pfaff, Bernhard Dr. 
> >
> >>  Well, without further information, I do not know, but try the 
> >> following
> >>
> >> library(urca)
> >> example(ca.jo)
> >> trend <- matrix(1:nrow(sjf), ncol = 1)
> >> colnames(trend) <- "trd"
> >> ca.jo(sjf, type = "trace", ecdet = "const", K = 2, spec = 
> "longrun", 
> >> dumvar = trend)
> >>
> >> Best,
> >> Bernhard
> >>
> >>
> >>
> >>  --
> >> *Von:* Grzegorz Konat [mailto:grzegorz.ko...@ibrkk.pl]
> >> *Gesendet:* Donnerstag, 31. März 2011 14:40
> >>
> >> *An:* Pfaff, Bernhard Dr.; r-help@r-project.org
> >> *Betreff:* Re: [R] VECM with UNRESTRICTED TREND
> >>
> >> 'time' was a trend variable from my.data set. Equivalent to the 
> >> output of the command 'matrix' you just gave me.
> >>
> >> So now I did:
> >>
> >>  library(urca)
> >> data(my.data)
> >> names(my.data)
> >> attach(my.data)
> >> dat1 <- my.data[, c("dY", "X", "dM")]
> >> mat1 <- matrix(seq(1:nrow(dat1)), ncol = 1)
> >> args('ca.jo')
> >> yxm.vecm <- ca.jo(dat1, type = "trace", ecdet = "const", K 
> = 2, spec 
> >> = "longrun", dumvar=mat1)
> >>
> >> and the output is:
> >>
> >>  Error in r[i1, , drop = FALSE] - r[-nrow(r):-(nrow(r) - 
> lag + 1L), , 
> >> drop = FALSE] :
> >>   non-numeric argument to binary operator In addition: Warning 
> >> message:
> >> In ca.jo(dat1, type = "trace", ecdet = "const", K = 2, spec = 
> >> "longrun",
> >>  :
> >> No column names in 'dumvar', using prefix 'exo' instead.
> >>
> >> What do I do wrong?
> >>
> >> Best,
> >> Greg
> >>
> >>
> >> 2011/3/31 Pfaff, Bernhard Dr. 
> >>
> >>>
> >>>
> >>>
> >>>  Hello Bernhard,
> >>>
> >>> thank You so much one again! Now I (more or less) understand the 
> >>> idea, but still have problem with its practical application.
> >>>
> >>> I do (somewhat following example 8.1 in your textbook):
> >>>
> >>>  library(urca)
> >>> data(my.data)
> >>> names(my.data)
> >>> attach(my.data)
> >>> dat1 <- my.data[, c("dY", "X", "dM")]
> >>> dat2 <- cbind(time)
> >>>
> >>> What is 'time'? Just employ matrix(seq(1:nrow(dat1)), 
> ncol = 1) for 
> >>> creating the trend variable.
> >>>
> >>> Best,
> >>> Bernhard
> >>>
> >>>
> >>>  args('ca.jo')
> >>> yxm.vecm &

Re: [R] Granger Causality in a VAR Model

2011-04-05 Thread Pfaff, Bernhard Dr.
The below email was cross-posted to R-Sig-Finance and has been answered there.  

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von ivan
> Gesendet: Montag, 4. April 2011 20:24
> An: r-help@r-project.org
> Betreff: [R] Granger Causality in a VAR Model
> 
> Dear Community,
> 
> I am new to R and have a question concerning the causality () 
> test in the vars package. I need to test whether, say, the 
> variable y Granger causes the variable x, given z as a 
> control variable.
> 
> I estimated the VAR model as follows: >model<-VAR(cbind(x,y,z),p=2)
> 
> Then I did the following: >causality(model, cause="y"). I 
> thing this tests the Granger causality of y on the vector 
> (x,z), though. How can I implement the test for y causing x 
> controlled for z? Thus, the F-test comparing the two models 
> M1:x~lagged(x)+lagged(z) and M2:x~lagged(x)+lagged(y)+lagged(z)?
> 
> Thank you in advance.
> 
> Best Regards
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Automatically extract info from Granger causality output

2011-04-15 Thread Pfaff, Bernhard Dr.
Dear Ivan,

first, it would pay off in terms of readability to employ line breaks; second, 
please provide a reproducible code snippet; and third, state which package you have 
used. Now to your questions:
1) What happens if you provide colnames for your objects?
2) What happens if you omit the $ after count?

Best,
Bernhard

ps: the function seems to have been ported from the package 'vars'. In this 
package the function causality() is included which returns a named list with 
elements of class htest.  

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von ivan
> Gesendet: Donnerstag, 14. April 2011 19:37
> An: r-help@r-project.org
> Betreff: [R] Automatically extract info from Granger causality output
> 
> Dear Community,
> 
> this is my first programming in R and I am stuck with a 
> problem. I have the following code which automatically 
> calculates Granger causalities from a variable, say e.g. "bs" 
> as below, to all other variables in the data frame:
> 
> log.returns<-as.data.frame( lapply(daten, function(x) 
> diff(log(ts(x) y1<-log.returns$bs
> y2<- log.returns[,!(names(log.returns) %in% "bs")]
> Granger<- function(y1,y2) {models=lapply(y2, function(x)
> VAR(cbind(x,y1),ic="SC") ); results=lapply(models,function(x) 
> causality(x,cause="y1")); print(results)}
> Count<-Granger(y1,y2)
> 
> which produces the following output (I have printed only part 
> of it (for Granger causality of bs on ml)):
> 
> $ml
> $ml$Granger
> 
> Granger causality H0: y1 do not Granger-cause x
> 
> data:  VAR object x
> F-Test = 0.2772, df1 = 1, df2 = 122, p-value = 0.5995
> 
> 
> $ml$Instant
> 
> H0: No instantaneous causality between: y1 and x
> 
> data:  VAR object x
> Chi-squared = 19.7429, df = 1, p-value = 8.859e-06
> 
> My questions:
> 
> 1)How can I edit the function above so that the output writes: Granger
> causality H0: bs do not Granger-cause ml   rather than  Granger
> causality H0: y1 do not Granger-cause x?
> 
> 2) I want to extract the p-values of the tests into a data 
> frame for instance. The problem is that the output has a 3 
> layer structure.
> Thus, for the above p-value I need to write count$ml$Granger$p.value.
> I thought of a loop of something like for(i in 
> 1:length(count)) {z=count$[[i]]$Granger$p.value} but it didn't work.
> 
> Thank you very much for your help.
> 
> Best Regards.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Automatically extract info from Granger causality output

2011-04-15 Thread Pfaff, Bernhard Dr.
Hello Ivan,

see example(causality) for the first question and use count[[i]] and not 
count$[[i]] for the second. The following works for me:

example(causality)
test1 <- causality(var.2c, "e")
test2 <- causality(var.2c, "prod")
tl <- list(test1, test2)
res <- matrix(NA, ncol = 1, nrow = length(tl))
for(i in 1:length(tl)){
res[i, ] <- tl[[i]]$Granger$p.value
}
res

hth,
Bernhard


> -Ursprüngliche Nachricht-
> Von: ivan [mailto:i.pet...@gmail.com] 
> Gesendet: Freitag, 15. April 2011 10:46
> An: Pfaff, Bernhard Dr.
> Cc: r-help@r-project.org
> Betreff: Re: [R] Automatically extract info from Granger 
> causality output
> 
> Dear Bernhard,
> 
> thank you very much for the response. Yes, I am using the 
> package "vars" with the functions VAR() and causality().
> 
> 1)Giving colnames to the objects does unfortunately not  
> change anything.
> 
> 2) I am not sure if I understood you right. Did you mean to 
> insert Countml$Granger$p.value rather than  
> Count$ml$Granger$p.value? This
> returns: Error: Object 'Countms' not found.
> 
> By the way, str(Count) produces the following:
> 
> > str(Count)
> List of 4
>  $ ml:List of 2
>   ..$ Granger:List of 5
>   .. ..$ statistic: num [1, 1] 0.277
>   .. .. ..- attr(*, "names")= chr "F-Test"
>   .. ..$ parameter: Named num [1:2] 1 122
>   .. .. ..- attr(*, "names")= chr [1:2] "df1" "df2"
>   .. ..$ p.value  : num [1, 1] 0.6
>   .. ..$ method   : chr "Granger causality H0: y1 do not 
> Granger-cause x"
>   .. ..$ data.name: chr "VAR object x"
>   .. ..- attr(*, "class")= chr "htest"
>   ..$ Instant:List of 5
>   .. ..$ statistic: num [1, 1] 19.7
>   .. .. ..- attr(*, "names")= chr "Chi-squared"
>   .. ..$ parameter: Named int 1
>   .. .. ..- attr(*, "names")= chr "df"
>   .. ..$ p.value  : num [1, 1] 8.86e-06
>   .. ..$ method   : chr "H0: No instantaneous causality 
> between: y1 and x"
>   .. ..$ data.name: chr "VAR object x"
>   .. ..- attr(*, "class")= chr "htest"
>  $ jp:List of 2
> .
> .
> .
> .
> 
> Best Regards,
> 
> Ivan
> 
> On Fri, Apr 15, 2011 at 10:13 AM, Pfaff, Bernhard Dr.
>  wrote:
> > Dear Ivan,
> >
> > first, it would pay-off in terms of readability to employ 
> line breaks and second to provide a reproducable code snippet 
> and third which package you have used. Now to your questions:
> > 1) What happens if you provide colnames for your objects?
> > 2) What happens if you omit the $ after count?
> >
> > Best,
> > Bernhard
> >
> > ps: the function seems to have been ported from the package 
> 'vars'. In this package the function causality() is included 
> which returns a named list with elements of class htest.
> >
> >> -Ursprüngliche Nachricht-
> >> Von: r-help-boun...@r-project.org
> >> [mailto:r-help-boun...@r-project.org] Im Auftrag von ivan
> >> Gesendet: Donnerstag, 14. April 2011 19:37
> >> An: r-help@r-project.org
> >> Betreff: [R] Automatically extract info from Granger 
> causality output
> >>
> >> Dear Community,
> >>
> >> this is my first programming in R and I am stuck with a problem. I 
> >> have the following code which automatically calculates Granger 
> >> causalities from a variable, say e.g. "bs"
> >> as below, to all other variables in the data frame:
> >>
> >> log.returns<-as.data.frame( lapply(daten, function(x)
> >> diff(log(ts(x) y1<-log.returns$bs
> >> y2<- log.returns[,!(names(log.returns) %in% "bs")]
> >> Granger<- function(y1,y2) {models=lapply(y2, function(x)
> >> VAR(cbind(x,y1),ic="SC") ); results=lapply(models,function(x) 
> >> causality(x,cause="y1")); print(results)}
> >> Count<-Granger(y1,y2)
> >>
> >> which produces the following output (I have printed only 
> part of it 
> >> (for Granger causality of bs on ml)):
> >>
> >> $ml
> >> $ml$Granger
> >>
> >>         Granger causality H0: y1 do not Granger-cause x
> >>
> >> data:  VAR object x
> >> F-Test = 0.2772, df1 = 1, df2 = 122, p-value = 0.5995
> >>
> >>
> >> $ml$Instant
> >>
> >>         H0: No instantaneous causality between: y1 and x
> >>
> >> data:  VAR object x
> >> Chi-squared = 19.7429, df = 1, p-value = 8.859e-06
> >>
&

Re: [R] SVAR Restriction on AB-model

2012-07-13 Thread Pfaff, Bernhard Dr.
Hello Veronica,

what makes you think that this is an error? It is a warning that your specified 
SVAR-model is **just** identified and hence an over-identification test cannot 
be conducted. You can suppress this warning by not requesting an 
over-identification test in the first place, i.e. by setting lrtest = FALSE in your call 
to SVAR(). See ?SVAR (Arguments and Details sections) and the package's 
vignette.
To your second question, provide zero entries in the respective column of the 
A-matrix except for the main-diagonal element. 
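For illustration, a minimal sketch along these lines with the 'vars' package (the
Canada data set, the chosen zero pattern and the object names are purely
illustrative, not your model); NA marks a parameter to be estimated, fixed numbers
are restrictions:

library(vars)
data(Canada)
var.2c <- VAR(Canada, p = 2, type = "const")
K <- 4
Amat <- diag(K)                 # unit main diagonal
Amat[lower.tri(Amat)] <- NA     # free contemporaneous coefficients
Amat[2:K, 1] <- 0               # zeros in the first column, except the diagonal
Bmat <- matrix(0, K, K)
diag(Bmat) <- NA                # free structural standard deviations
svar.ab <- SVAR(var.2c, Amat = Amat, Bmat = Bmat, lrtest = FALSE, max.iter = 1000)
svar.ab$A

With lrtest = FALSE no over-identification test is attempted in the first place.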

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von vero_acurio
Gesendet: Donnerstag, 12. Juli 2012 16:10
An: r-help@r-project.org
Betreff: [R] SVAR Restriction on AB-model

Hello!

I'm doing a svar and when I make the estimation the next error message
appears:

In SVAR(x, Amat = amat, Bmat = bmat, start = NULL, max.iter = 1000,  :
  The AB-model is just identified. No test possible.

Could you help me to interpret it please.

Also I have the identification assumption that one of my shocks is exogenous 
relative to the contemporaneous values of the other variables in the SVAR, 
could you help me with the construction of the restriction matrices A and B of 
the SVAR model please?

Thanks a lot!

Best Regards,

Veronica

--
View this message in context: 
http://r.789695.n4.nabble.com/SVAR-Restriction-on-AB-model-tp4636306.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] VAR with excluded lags

2011-06-24 Thread Pfaff, Bernhard Dr.
?restrict 
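A minimal sketch of the manual route in 'vars' (the data set and lag pattern are
purely illustrative): fit a VAR(5) and then zero out lags 3 and 4 in every
equation, so that effectively only lags 1, 2 and 5 remain. In resmat a 1 keeps a
coefficient and a 0 restricts it to zero; the columns follow the ordering of the
regressors (all K variables at lag 1, then lag 2, ..., then the deterministic terms).

library(vars)
data(Canada)
var.5 <- VAR(Canada, p = 5, type = "const")
K <- 4; p <- 5
keep <- c(1, 1, 0, 0, 1)                   # lags 1, 2, 5 kept; lags 3, 4 dropped
row.pattern <- c(rep(keep, each = K), 1)   # K regressors per lag plus the constant
resmat <- matrix(row.pattern, nrow = K, ncol = K * p + 1, byrow = TRUE)
var.r <- restrict(var.5, method = "manual", resmat = resmat)
Acoef(var.r)                               # the lag-3 and lag-4 matrices are zero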

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von gizmo
> Gesendet: Mittwoch, 22. Juni 2011 18:26
> An: r-help@r-project.org
> Betreff: [R] VAR with excluded lags
> 
> Hi,
> 
> I would like to fit a Vector Auto Regression with lags that 
> are not consecutive with the vars package (or other if there 
> is one as good). Is it possible?
> 
> For example, rather than having lags 1, 2, 3, 4, 5 have 1, 2, 5.
> 
> Thanks. 
> 
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/VAR-with-excluded-lags-tp3617485
p3617485.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] BY GROUP in evir R package

2011-07-06 Thread Pfaff, Bernhard Dr.
Hello Peter,

str(rg2)

is quite revealing here; by() returns a list and hence lapply() can be 
employed, e.g.:
lapply(rg2, rlevel.gev, k.blocks = 5)

By the same token, you can extract the relevant bits and pieces and put them 
together in a data.frame.
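For instance, a minimal sketch (assuming the object 'rg2' created by the by()
call in the quoted code below):

est <- lapply(rg2, function(x) c(x$par.ests, se = x$par.ses))
est.df <- as.data.frame(do.call(rbind, est))   # one row per group
est.df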

Best,
Bernhard 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von Peter Maclean
> Gesendet: Mittwoch, 6. Juli 2011 09:25
> An: Dr. Bernhard Pfaff
> Cc: r-help@r-project.org
> Betreff: Re: [R] BY GROUP in evir R package
> 
> Dr. Pfaff: 
> How do we pass the "by" results to "rlevel.gev" function to 
> get the return level and also save the results (both 
> rg2(par.ests and $par.ses) and rl) as.data.frame?
> 
> #Grouped vector
> Gdata <- data.frame(n = rep(c(1,2,3), each = 100), y = rnorm(300))
> library(evir)
> require(plyr)
> 
> #Model for Grouped
> rg2<- by(Gdata,Gdata[,"n"], function(x) gev(x$y, 5, method = 
> "BFGS", control =list(maxit = 500))) # rl <- rlevel.gev(rg2, 
> k.blocks = 5, add = TRUE)
>  
> 
> 
> 
> - Original Message 
> From: Dr. Bernhard Pfaff 
> To: Peter Maclean 
> Sent: Fri, June 3, 2011 2:45:28 PM
> Subject: Re: BY GROUP in evir R package
> 
> Hello Peter,
> 
> many thanks for your email. Well, as you might have guessed, 
> there is also a 
> function by() in R that does the same job. See help("by") for 
> more information.
> 
> Best,
> Bernhard
> 
> Peter Maclean schrieb:
> > Hi,
> > I am new in R and I want to use your package for data 
> analysis. I usually use 
> >SAS. I have rainfall data for different points. Each point 
> has 120 observations. 
> >The rainfall data is in the first column (RAIN) and the 
> categorical variable 
> >that group the data is in the second column (GROUP). The 
> data frame is 
> >rain.data. How can I use the gev function to estimate all 
> three parameters by 
> >GROUP variable group? In SAS there is a by() function that 
> estimate the model by 
> >group. However, I would like to move to R.
> >  With thanks, 
> >  Peter Maclean
> > Department of Economics
> > University of Dar -es- Salaam, Tanzania 
> > 
> > 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] BY GROUP in evir R package

2011-07-07 Thread Pfaff, Bernhard Dr.
lapply(rg2, function(x) x$par.ests) 

there is no slot 'residuals'! The function gev() returns an S3 object with 
class attribute 'gev'; see ?gev. 

> 
> Dr. Pfaff:
> 
> After using str; can you give an example on data extration 
> (e.g. for $par.ests and @residuals)
> 
> 
> 
> - Original Message 
> From: "Pfaff, Bernhard Dr." 
> To: Peter Maclean ; Dr. Bernhard Pfaff 
> 
> Cc: "r-help@r-project.org" 
> Sent: Wed, July 6, 2011 8:17:12 AM
> Subject: AW: [R] BY GROUP in evir R package
> 
> Hello Peter,
> 
> str(rg2)
> 
> us quite revealing for this; by() returns a list and hence 
> lapply() can be 
> employed, e.g.:
> lapply(rg2, rlevel.gev, k.blocks = 5)
> 
> By the same token, you can extract the relevant bits and 
> pieces and put them 
> together in a data.frame.
> 
> Best,
> Bernhard 
> 
> > -Ursprüngliche Nachricht-
> > Von: r-help-boun...@r-project.org 
> > [mailto:r-help-boun...@r-project.org] Im Auftrag von Peter Maclean
> > Gesendet: Mittwoch, 6. Juli 2011 09:25
> > An: Dr. Bernhard Pfaff
> > Cc: r-help@r-project.org
> > Betreff: Re: [R] BY GROUP in evir R package
> > 
> > Dr. Pfaff: 
> > How do we pass the "by" results to "rlevel.gev" function to 
> > get the return level and also save the results (both 
> > rg2(par.ests and $par.ses) and rl) as.data.frame?
> > 
> > #Grouped vector
> > Gdata <- data.frame(n = rep(c(1,2,3), each = 100), y = rnorm(300))
> > library(evir)
> > require(plyr)
> > 
> > #Model for Grouped
> > rg2<- by(Gdata,Gdata[,"n"], function(x) gev(x$y, 5, method = 
> > "BFGS", control =list(maxit = 500))) # rl <- rlevel.gev(rg2, 
> > k.blocks = 5, add = TRUE)
> >  
> > 
> > 
> > 
> > - Original Message 
> > From: Dr. Bernhard Pfaff 
> > To: Peter Maclean 
> > Sent: Fri, June 3, 2011 2:45:28 PM
> > Subject: Re: BY GROUP in evir R package
> > 
> > Hello Peter,
> > 
> > many thanks for your email. Well, as you might have guessed, 
> > there is also a 
> > function by() in R that does the same job. See help("by") for 
> > more information.
> > 
> > Best,
> > Bernhard
> > 
> > Peter Maclean schrieb:
> > > Hi,
> > > I am new in R and I want to use your package for data 
> > analysis. I usually use 
> > >SAS. I have rainfall data for different points. Each point 
> > has 120 observations. 
> > >The rainfall data is in the first column (RAIN) and the 
> > categorical variable 
> > >that group the data is in the second column (GROUP). The 
> > data frame is 
> > >rain.data. How can I use the gev function to estimate all 
> > three parameters by 
> > >GROUP variable group? In SAS there is a by() function that 
> > estimate the model by 
> > >group. However, I would like to move to R.
> > >  With thanks, 
> > >  Peter Maclean
> > > Department of Economics
> > > University of Dar -es- Salaam, Tanzania 
> > > 
> > > 
> > 
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide 
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> > 
> 
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using Windows 7 Task Scheduler with R source scripts

2011-07-08 Thread Pfaff, Bernhard Dr.
Hello Dan,

I reckon that you need to pass a batch file to the scheduler, i.e. something 
along the lines of 

R CMD BATCH script.R

should be included in, say, 'RBatchjob.bat', and this file should then be called 
by the task scheduler. 

Best,
Bernhard
 

> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Im Auftrag von 
> daniel.e...@barclayswealth.com
> Gesendet: Freitag, 8. Juli 2011 16:33
> An: r-help@r-project.org
> Betreff: [R] Using Windows 7 Task Scheduler with R source scripts
> 
> Hello all,
> 
> I'm trying to get a specific source file to run at a certain 
> time each day with WindowsScheduler 
> http://windows.microsoft.com/en-US/windows7/schedule-a-task
> 
> I've tried a number of methods, none of which work:
> My best guess was:
> 1. Associate the script.R file with R in FileTypes.
> 2. Call the script.R file in the scheduler
> 
> This definitely opens R, but the source file doesn't execute.
> 
> Any ideas?
> 
> Much thanks,
> Dan
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] What is the CADF test criterion="BIC" report?

2011-11-14 Thread Pfaff, Bernhard Dr.
Hello Paul,

just a guess: different sample sizes! In your first call, the sample is shorter 
than in your second. You can test this by curtailing your data set in your 
second call; you should then obtain the same result, i.e.:

> library(vars)
> data(Canada)
> test <- summary(CADFtest(Canada[-c(1:13), 1], max.lag.y = 1))
> test
Augmented DF test
ADF test
t-test statistic:  -1.389086
p-value:0.855681
Max lag of the diff. dependent variable:1.00

Call:
dynlm(formula = formula(model), start = obs.1, end = obs.T)

Residuals:
 Min   1Q   Median   3Q  Max
-0.79726 -0.20587 -0.03332  0.23840  0.70460

Coefficients:
 Estimate Std. Error t value Pr(>|t|)
(Intercept) 24.471789  17.521147   1.3970.167
trnd 0.009959   0.006941   1.4350.156
L(y, 1) -0.026068   0.018767  -1.3890.856
L(d(y), 1)   0.615983   0.092632   6.650 7.18e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3533 on 65 degrees of freedom
Multiple R-squared: 0.413,  Adjusted R-squared: 0.3859
F-statistic:NA on NA and NA DF,  p-value: NA

Though I am not the package maintainer, who could provide you with more 
insight, the source code itself is a further place to look.

Best,
Bernhard



-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von p99323...@ntu.edu.tw
Gesendet: Montag, 14. November 2011 04:35
An: r-help@r-project.org
Betreff: [R] What is the CADF test criterion="BIC" report?

Hello:
   I am a rookie in using R. When I used the unit root test in "CADFtest", I 
got different t-test statistics when using criterion="BIC" and when not using a 
criterion. But when I checked the results against EViews, I found that the run 
without a criterion is correct. Why do I get a different result after using 
criterion="BIC"?


Paul


> data(Canada)

> ADFt <- CADFtest(Canada[,1], max.lag.y = 14, criterion="BIC")

> summary(ADFt)
Augmented DF test
 ADF test
t-test statistic:  -1.389086
p-value:0.855681
Max lag of the diff. dependent variable:1.00

Call:
dynlm(formula = formula(model), start = obs.1, end = obs.T)

Residuals:
  Min   1Q   Median   3Q  Max
-0.79726 -0.20587 -0.03332  0.23840  0.70460

Coefficients:
  Estimate Std. Error t value Pr(>|t|)
(Intercept) 24.342321  17.435476   1.3960.167
trnd 0.009959   0.006941   1.4350.156
L(y, 1) -0.026068   0.018767  -1.3890.856
L(d(y), 1)   0.615983   0.092632   6.650 7.18e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3533 on 65 degrees of freedom
Multiple R-squared: 0.413,  Adjusted R-squared: 0.3859
F-statistic:NA on NA and NA DF,  p-value: NA

> ADFt1 <- CADFtest(Canada[,1], max.lag.y =1)

> summary(ADFt1)
Augmented DF test
  ADF test
t-test statistic:  -2.7285715
p-value:0.2282588
Max lag of the diff. dependent variable:1.000

Call:
dynlm(formula = formula(model), start = obs.1, end = obs.T)

Residuals:
  Min   1Q   Median   3Q  Max
-0.84769 -0.24745 -0.02081  0.24187  0.82344

Coefficients:
  Estimate Std. Error t value Pr(>|t|)
(Intercept) 47.661910  17.439021   2.733  0.00776 **
trnd 0.019217   0.007005   2.743  0.00754 **
L(y, 1) -0.051256   0.018785  -2.729  0.22826
L(d(y), 1)   0.753011   0.075724   9.944 1.61e-15 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3937 on 78 degrees of freedom
Multiple R-squared: 0.5674, Adjusted R-squared: 0.5508
F-statistic:NA on NA and NA DF,  p-value: NA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Copula Fitting Using R

2011-11-25 Thread Pfaff, Bernhard Dr.
Hello Denis & Fayyad,

in principle the advice given is appropriate, but QRMlib has been removed from 
CRAN lately, due to a glitch between its dependencies and the current version of 
R. Hence, if one wants to get the package installed and does not want to wait until 
it shows up on CRAN again, one should do the following in the interim:

1) Grab an old release of QRMlib from the CRAN archive.
2) Obtain the packages fSeries and fCalendar from R-Forge
3) Install 2) and 1) in that order.
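A rough sketch of those steps (the package version and repository locations are
assumptions and may have changed, so check the CRAN archive and the R-Forge
Rmetrics project for the current file names):

install.packages(c("fCalendar", "fSeries"),
                 repos = "http://R-Forge.R-project.org")           # steps 2 and 3: dependencies first
url <- "http://cran.r-project.org/src/contrib/Archive/QRMlib/QRMlib_1.4.5.1.tar.gz"
download.file(url, destfile = "QRMlib_1.4.5.1.tar.gz")              # step 1: archived release
install.packages("QRMlib_1.4.5.1.tar.gz", repos = NULL, type = "source")  # step 3: QRMlib last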

Best,
Bernhard

ps: The package maintainer of QRMlib is aware of the problem and, hopefully, the 
package will be fixed quite soon.  

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Dennis Murphy
Gesendet: Freitag, 25. November 2011 09:10
An: cahaya iman
Cc: r-help@r-project.org
Betreff: Re: [R] Copula Fitting Using R

Hi:

This is the type of question for which the sos package can come to the rescue:

library('sos')
findFn('Gumbel Clayton copula')

It appears that the QRMlib package would be a reasonable place to start.

Dennis

On Thu, Nov 24, 2011 at 7:29 PM, cahaya iman  wrote:
> Hi,
>
> Is anybody using Copula package for fitting copulas to own data?
> I have two marginals Log Normal with (parameters 1.17 and 0.76) and Gamma (
> 2.7 and 1.05)
>
> Which package I should use to fit Gumbel and Clayton Copulas?
>
> Thanks,
> fayyad
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plot method for ca.jo

2012-03-20 Thread Pfaff, Bernhard Dr.
?getMethod
getMethod("plot", c("ca.jo", "missing"))

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Keith Weintraub
Gesendet: Dienstag, 20. März 2012 16:36
An: r-help@r-project.org
Betreff: [R] Plot method for ca.jo

Folks,
  How would I find the code for a plot function that is in a package?

I want to understand exactly what is being plotted.

Thanks,
KW

--


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A question on Unit Root Test using "urca" toolbox

2012-02-03 Thread Pfaff, Bernhard Dr.
Hello Miao,

short answer: different sample sizes are used in your tests. 
long answer: in your first instance, the common sample size is determined by 
the allowance for 12 lags, so that one is not comparing test results derived 
from different sample sizes when selecting the lag order. Hence, in your second 
instance, a longer sample has been used.
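To illustrate with the poster's objects (x1 and nr1; the offset of eight
observations, i.e. 12 - 4, is the adjustment that puts both calls on the same
sample):

library(urca)
y <- x1$CPILD4[5:nr1]
fit.bic  <- ur.df(y,         type = "drift", lags = 12, selectlags = "BIC")  # BIC keeps 4 lags
fit.same <- ur.df(y[-(1:8)], type = "drift", lags = 4)                       # same sample as above
summary(fit.bic)
summary(fit.same)   # the test statistics should now coincide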

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von jpm miao
Gesendet: Freitag, 3. Februar 2012 08:45
An: r-help@r-project.org
Betreff: [R] A question on Unit Root Test using "urca" toolbox

Hello,

   I have a question on unit root test with urca toolbox.

   First, to run a unit root test with lags selected by BIC, I type:

> CPILD4UR<-ur.df(x1$CPILD4[5:nr1], type ="drift", lags=12, selectlags 
> ="BIC")
> summary(CPILD4UR)

   The results indicate that the optimal lag order selected by BIC is 4.

   Then I run the same unit root test with drift and 4 lags:

> CPILD4UR1<-ur.df(x1$CPILD4[5:nr1], type ="drift", lags =4)
> summary(CPILD4UR1)

   Nevertheless, the results are different. Could anyone tell me why?
   In EViews these two are the same and the results are close to my first case.

Thanks!

A complete log:

> CPILD4UR1<-ur.df(x1$CPILD4[5:nr1], type ="drift", lags =4)
> summary(CPILD4UR1)

###
# Augmented Dickey-Fuller Test Unit Root Test # 
###

Test regression drift


Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
 Min   1Q   Median   3Q  Max
-2.43555 -0.63440 -0.03048  0.53522  2.84237

Coefficients:
 Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.238631   0.137262   1.739 0.084944
z.lag.1 -0.153030   0.061841  -2.475 0.014881
z.diff.lag1  0.011463   0.090330   0.127 0.899252
z.diff.lag2  0.008764   0.089850   0.098 0.922479
z.diff.lag3  0.149529   0.088930   1.681 0.095546
z.diff.lag4 -0.349870   0.088847  -3.938 0.000145

(Intercept) .
z.lag.1 *
z.diff.lag1
z.diff.lag2
z.diff.lag3 .
z.diff.lag4 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9526 on 109 degrees of freedom
Multiple R-squared: 0.2514, Adjusted R-squared: 0.2171
F-statistic: 7.321 on 5 and 109 DF,  p-value: 5.989e-06


Value of test-statistic is: -2.4746 3.0877

Critical values for test statistics:
  1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

> CPILD4UR<-ur.df(x1$CPILD4[5:nr1], type ="drift", lags=12, selectlags 
> ="BIC")
> summary(CPILD4UR)

###
# Augmented Dickey-Fuller Test Unit Root Test # 
###

Test regression drift


Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
Min  1Q  Median  3Q Max
-2.2551 -0.6335 -0.0372  0.5189  2.8249

Coefficients:
 Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.250966   0.141141   1.778 0.078392
z.lag.1 -0.141102   0.062614  -2.254 0.026388
z.diff.lag1 -0.006698   0.093897  -0.071 0.943274
z.diff.lag2 -0.014133   0.093575  -0.151 0.880251
z.diff.lag3  0.144329   0.091552   1.576 0.118042
z.diff.lag4 -0.355845   0.091441  -3.892 0.000179

(Intercept) .
z.lag.1 *
z.diff.lag1
z.diff.lag2
z.diff.lag3
z.diff.lag4 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9462 on 101 degrees of freedom
Multiple R-squared: 0.2521, Adjusted R-squared: 0.215
F-statistic: 6.808 on 5 and 101 DF,  p-value: 1.647e-05


Value of test-statistic is: -2.2535 2.5438

Critical values for test statistics:
  1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Package 'fCalendar'

2012-02-23 Thread Pfaff, Bernhard Dr.
Hello Brit and Michael,

indeed, fCalendar was replaced by timeDate (as was fSeries by timeSeries). Old 
versions of both packages are in the CRAN archive. Now, with respect to QRMlib, 
the package author/maintainer (cc'ed on this email) is pretty close to a 
re-submission of his package to CRAN. Having said this, it might be worth 
waiting before trying to get QRMlib running on top of the above-mentioned 
packages, which have been removed from CRAN.  Scott, do you have a tentative 
schedule for the re-release of your package on CRAN in mind?

Best,
Bernhard


-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von R. Michael Weylandt
Gesendet: Mittwoch, 22. Februar 2012 15:06
An: Britt Grt
Cc: r-help@r-project.org
Betreff: Re: [R] Package 'fCalendar'

I believe fCalendar was replaced by timeDate which does have a namespace and 
can be acquired from CRAN.

Michael

On Wed, Feb 22, 2012 at 5:41 AM, Britt Grt  wrote:
>
> Dear,
>
> I'm a master student mathematics at university Gent, who's writing a thesis 
> about vines and copula's.
> I'm in trouble with the package 'fCalendar' which I need for running 'QRMlib'.
> The problem is that 'fCalendar' doesn't have a namespace. I need to 
> use R.2.14.1 because I also need the package 'vines' which only works for 
> R.2.14.1.
> I'm afraid making a namespace myself is much too complicated, I read 
> much about it, but I really do'nt know how to do it exactly.
> Is it possible to get a version of fCalendar' with namespace, so adjusted for 
> R.2.14.1?
> A tar.gz file would be fine.
> I really hope you can help me,
>
> kind regards,
> Britt Grootaerd
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Installing package QRMlib

2012-02-28 Thread Pfaff, Bernhard Dr.
Dear all:

well, what Duncan has suggested would work in principle. However, the 
dependencies of QRMlib as contained in the archive have been deprecated, and the 
package maintainer (cc'ed on this email directly) is pretty close to a 
re-release of his package on CRAN, whereby primarily the outdated package 
dependency on fSeries is changed to timeSeries. 
Hence, rather than grabbing the deprecated package dependencies from R-Forge and 
installing these, it might be worth waiting for the re-submission of QRMlib to 
CRAN, given that it will be made in due course. Scott, do you have any further 
information on when QRMlib will be made available again on CRAN?

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von Duncan Murdoch
Gesendet: Montag, 27. Februar 2012 21:16
An: R. Michael Weylandt
Cc: r-help@r-project.org; DT54321
Betreff: Re: [R] Installing package QRMlib

On 27/02/2012 3:01 PM, R. Michael Weylandt wrote:
> Do you perhaps need to add install.packages(...,  type="src")? Just a
> (untested) guess...

That should be type="source", and that should solve the problem, assuming 
Deepan has the necessary tools installed.  If not, he can get them from CRAN in 
the bin/windows/Rtools directory.

Duncan Murdoch

> Michael
>
> On Mon, Feb 27, 2012 at 12:07 PM, DT54321  wrote:
> >  Hi,
> >  I am having real problems downloading the package 'QRMlib'. The 
> > tar.gz file  is shown here:
> >
> >  http://cran.r-project.org/src/contrib/Archive/QRMlib/
> >
> >  I have downloaded this to my local folder and entered the following 
> > command:
> >
> >  nstall.packages("myLocalFolder/QRMlib_1.4.5.1.tar.gz", repos = 
> > NULL)
> >
> >  but I am getting the following error message
> >
> >  Installing package(s) into 'C:/Program Files/R/R-2.14.1/library'
> >  (as 'lib' is unspecified)
> >  Warning in install.packages :
> >error 1 in extracting from zip file  Warning in install.packages 
> > :
> >cannot open compressed file 'QRMlib_1.4.5.1.tar.gz/DESCRIPTION', 
> > probable  reason 'No such file or directory'
> >  Error in install.packages : cannot open the connection
> >
> >  What am I doing wrong??
> >
> >  Thanks
> >
> >  --
> >  View this message in context: 
> > http://r.789695.n4.nabble.com/Installing-package-QRMlib-tp4425269p44
> > 25269.html  Sent from the R help mailing list archive at Nabble.com.
> >
> >  __
> >  R-help@r-project.org mailing list
> >  https://stat.ethz.ch/mailman/listinfo/r-help
> >  PLEASE do read the posting guide 
> > http://www.R-project.org/posting-guide.html
> >  and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Installing package QRMlib

2012-02-28 Thread Pfaff, Bernhard Dr.
As stated, you need to install the *deprecated* dependencies of QRMlib as shown 
in its DESCRIPTION, as well as the *deprecated* packages those in turn depend on. 
These can still be fetched from R-Forge (Rmetrics project). The package 
'timeSeries' will become a dependency of the re-released QRMlib package 
on CRAN.  

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von DT54321
Gesendet: Dienstag, 28. Februar 2012 11:10
An: r-help@r-project.org
Betreff: Re: [R] Installing package QRMlib

Thanks for the reply guys. Well, I've tried the following command after 
installing the package dependencies, including timeSeries:

install.packages(file_name, type = "source", repos = NULL)
 
And still no luck... I get the following error message:

Installing package(s) into ‘C:/Program Files/R/R-2.14.1/library’
(as ‘lib’ is unspecified)
* installing *source* package 'QRMlib' ...
** Creating default NAMESPACE file
** libs
ERROR: compilation failed for package 'QRMlib'
* removing 'C:/Program Files/R/R-2.14.1/library/QRMlib'
* restoring previous 'C:/Program Files/R/R-2.14.1/library/QRMlib'
Warning in install.packages :
  running command 'C:/PROGRA~1/R/R-214~1.1/bin/i386/R CMD INSTALL -l
"C:/Program Files/R/R-2.14.1/library"   
"my_local_folder/QRMlib_1.4.5.1.tar.gz"' had status 1 Warning in 
install.packages :
  installation of package ‘my_local_folder/QRMlib_1.4.5.1.tar.gz’ had non-zero 
exit status

Any ideas??

--
View this message in context: 
http://r.789695.n4.nabble.com/Installing-package-QRMlib-tp4425269p4427627.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Installing package QRMlib

2012-02-29 Thread Pfaff, Bernhard Dr.
Well, because QRMlib interfaces with C routines (IIRC), the error message is pretty 
indicative, i.e. these routines cannot be compiled. Without further 
information there is not much to recommend, other than:

1) check your RTools installation
2) Ask the package maintainer (cc'ed) when he will re-release QRMlib on CRAN, 
be patient until the binaries have been populated and use install.packages()

Best,
Bernhard

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von DT54321
Gesendet: Mittwoch, 29. Februar 2012 14:17
An: r-help@r-project.org
Betreff: Re: [R] Installing package QRMlib

I wouldn't see myself as an experienced R user, so I would appreciate it if anyone 
is able to give me a clear set of instructions on how to install and load 
QRMlib. The steps I've followed are:

1: Download 'QRMlib_1.4.5.1.tar.gz' from 
http://cran.r-project.org/src/contrib/Archive/QRMlib/ to my local folder.

2. QRMlib depends on methods, fCalendar, fEcofin, mvtnorm, chron, its and Hmisc, so I 
install all of these. I install the following from within RStudio using the 
Packages tab:

a) mvtnorm
b) chron
c) its
d) Hmisc
e) methods

And download the tar.gz files from the archive for:

A) fCalendar
b) fEcofin

In R, I also use the install.packages with type = "source" and repos = NULL to 
install the packages listed in A) and B)

3. In R, then enter

install.packages(file_name, type = "source", repos = NULL)

where file_name is the directory for QRMlib_1.4.5.1.tar.gz.

It outputs the following error:

Installing package(s) into ‘C:/Program Files/R/R-2.14.1/library’
(as ‘lib’ is unspecified)
* installing *source* package 'QRMlib' ...
** Creating default NAMESPACE file
** libs
ERROR: compilation failed for package 'QRMlib'
* removing 'C:/Program Files/R/R-2.14.1/library/QRMlib'
* restoring previous 'C:/Program Files/R/R-2.14.1/library/QRMlib'
Warning in install.packages :
  running command 'C:/PROGRA~1/R/R-214~1.1/bin/i386/R CMD INSTALL -l 
"C:/Program Files/R/R-2.14.1/library"  
"my_local_folder/QRMlib_1.4.5.1.tar.gz"' had status 1 Warning in 
install.packages :
  installation of package ‘Pmy_local_folder/QRMlib_1.4.5.1.tar.gz’ had non-zero 
exit status

Please help!

--
View this message in context: 
http://r.789695.n4.nabble.com/Installing-package-QRMlib-tp4425269p4431453.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How are the coefficients for the ur.ers, type DF-GLS calculated?

2012-03-01 Thread Pfaff, Bernhard Dr.
Ackbar:

have a look at ur.ers directly. The coefficients can be recovered from the slot 
'testreg', i.e.,

example(ur.ers)
slotNames(ers.gnp)
coef(ers.gnp@testreg)

RTFM: help("ur.ers") and help("ur.ers-class")

Best,
Bernhard


-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im 
Auftrag von ackbar03
Gesendet: Mittwoch, 29. Februar 2012 17:20
An: r-help@r-project.org
Betreff: [R] How are the coefficients for the ur.ers, type DF-GLS calculated?

I need some real help on this; I'm really stuck.

How are the coefficients calculated for
ur.ers(y, type = c("DF-GLS", "P-test"), model = c("constant", "trend"),
   lag.max = 0)

The max lag is set at zero, so the regression should simply be

Diff(zt) = a*z(t-1)

where a is the value I'm trying to find and the z(t)'s are the detrended values.
But when I perform my own regression on the two time series I get different 
values. This could only mean

1) Its not just a simple regression
or
2) I'm detrending my data incorrectly.

However, i've followed the instructions I've seen in research papers and it 
doesn't seem to be right. Basically I take Y*_t = Y_t - (1 - 7/T)*Y_(t-1) and 
regress that on 1 - (1 - 7/T) for all t > 1, leaving the values at t = 1 unchanged. 
Then I take Yt and subtract the coefficient of the regression to get the 
detrended value.

I'm really stuck on this and its really frustrating. I think the easiest thing 
would be if someone can tell me exactly how R carries out the calculations for 
the functions. Help will be highly appreciated!!


--
View this message in context: 
http://r.789695.n4.nabble.com/How-are-the-coefficients-for-the-ur-ers-type-DF-GLS-calculated-tp4432015p4432015.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Simulate values from VAR

2012-03-01 Thread Pfaff, Bernhard Dr.
Hello Keith,

see ?Acoef for retrieving the coefficients. Incidentally, in the package dse 
simulation methods are made available.
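If you prefer to stay within 'vars', here is a minimal sketch of simulating one
path from a fitted VAR by iterating on its coefficient matrices (the Canada data,
the path length and the Gaussian innovations are illustrative assumptions):

library(vars)
library(MASS)                        # for mvrnorm()
data(Canada)
fit <- VAR(Canada, p = 2, type = "const")
A     <- Acoef(fit)                  # list of K x K lag-coefficient matrices
nu    <- Bcoef(fit)[, "const"]       # intercept vector
Sigma <- summary(fit)$covres         # residual covariance matrix
K <- fit$K; p <- fit$p; n.sim <- 100
y <- matrix(0, n.sim + p, K)
y[1:p, ] <- tail(Canada, p)          # start from the last observed values
for (t in (p + 1):(n.sim + p)) {
  y[t, ] <- nu + mvrnorm(1, rep(0, K), Sigma)
  for (j in 1:p) y[t, ] <- y[t, ] + A[[j]] %*% y[t - j, ]
}
sim <- y[-(1:p), ]                   # the simulated path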

Best,
Bernhard


Dr. Bernhard Pfaff
Director
Global Asset Allocation

Invesco Asset Management Deutschland GmbH
An der Welle 5
D-60322 Frankfurt am Main

Tel: +49 (0)69 29807 230
Fax: +49 (0)69 29807 178
www.institutional.invesco.com
Email: bernhard_pf...@fra.invesco.com

Geschäftsführer: Karl Georg Bayer, Bernhard Langer, Dr. Jens Langewand, 
Alexander Lehmann, Christian Puschmann
Handelsregister: Frankfurt am Main, HRB 28469
Sitz der Gesellschaft: Frankfurt am Main




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] FRB/US

2008-06-19 Thread Pfaff, Bernhard Dr.
Hello Tony,

I am not aware of an out-of-the-box solution to your problem. However, in terms 
of macroeconometric simultaneous equation models, I have used the FP-program 
(see: http://fairmodel.econ.yale.edu/fp/fp.htm). Prof. Fair is so kind to 
provide the binaries and sources from his web-site. You could then cast the 
FRB-model in the FP-scripting language and access/run this file from R. At 
least this worked for me. 
Not very pleasing, but an option would be to port the FORTRAN code to an R 
package.
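By way of illustration, a rough sketch of the "run it from R" route via a system
call (the executable name, paths and output file are hypothetical placeholders):

fp.exe   <- "C:/fp/fp.exe"                 # wherever the FP binary lives
fp.model <- "C:/models/frbus.fp"           # the model cast in FP's scripting language
status <- system(paste(shQuote(fp.exe), shQuote(fp.model)), wait = TRUE)
if (status != 0) warning("FP run returned a non-zero exit status")
## then read back whatever output file the FP run writes, e.g.
## sol <- read.csv("C:/models/frbus_solution.csv")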

Hope this is a first helpful pointer for you.

Best,
Bernhard 

>-Ursprüngliche Nachricht-
>Von: [EMAIL PROTECTED] 
>[mailto:[EMAIL PROTECTED] Im Auftrag von Tony Burns
>Gesendet: Mittwoch, 18. Juni 2008 20:07
>An: r-help@r-project.org
>Betreff: [R] FRB/US
>
>I would like to run the Federal Reserves econometric model on 
>open source
>software - they sent me the specifications for the model and what is
>neccessary to run the model on TROLL (which is a commercially available
>econometric software)
>
>I am looking into the feasability of doing this and whether or 
>not R is the
>right tool - I would like to discuss the idea with someone who 
>is familiar
>with this sort of thing as I have limited experience running 
>econometric
>models
>
>Thanks in advance to anyone who can help - if I can figure out 
>how to do
>this I would like to make a ready to run version of the FRB/US 
>model an open
>source project
>
>cheers,
>Tony
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Cointegration no constant

2008-03-20 Thread Pfaff, Bernhard Dr.
>
>Hi,
>
>I am trying to estimate a VECM without constant using the 
>following code:
>
>data(finland)
>sjf <- finland
>sjf.reg<-ca.jo(sjf, type = c("eigen"), ecdet = c("none"), K = 
>2,spec=c("transitory"), season = NULL, dumvar = NULL)
>cajools(sjf.reg)
>
>
>While the cointegration test does not use a constant, it is 
>used in the cajools which I do not want. I am sure I am doing 
>something wrong - what should I change?

Hello Ralph,

are you really sure that you do not want to include a constant in cajools?
If so, this can be swiftly accomplished by:

sjf.reg@Z1 <- sjf.reg@Z1[, -1]
cajools(sjf.reg)

i.e., you drop the constant from the slot Z1 (have a look at cajools to
see how the regressions are set up). You might want to consider using
cajorls, too.


Best,
Bernhard

>
>Any help very much appreciated!
>
>Ralph
>_
>Need to know the score, the latest news, or you need your 
>Hotmail(r)-get your "fix".
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] multivariate time series

2008-04-18 Thread Pfaff, Bernhard Dr.
Hello Erin,

have you considered the package bundle "dse" on CRAN?


Best,
Bernhard

>
>Dear R People:
>
>I was looking to see if there are any functions for Vector 
>ARMA modeling.
>
>I found Vector AR(p) but no Vector ARMAs.
>
>Thanks,
>Erin
>
>
>-- 
>Erin Hodgess
>Associate Professor
>Department of Computer and Mathematical Sciences
>University of Houston - Downtown
>mailto: [EMAIL PROTECTED]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] question regarding arima function and predicted values

2007-12-12 Thread Pfaff, Bernhard Dr.

>Good evening!
>
>I have a question regarding  forecast package and time series analysis.
>My syntax:
>
>x<-c(253, 252, 275, 275, 272, 254, 272, 252, 249, 300, 244, 
>258, 255, 285, 301, 278, 279, 304, 275, 276, 313, 292, 302, 
>322, 281, 298, 305, 295, 286, 327, 286, 270, 289, 293, 287, 
>267, 267, 288, 304, 273, 264, 254, 263, 265, 278)
>library(forecast)
>arima(x, order=c(1,1,2), seasonal=list(order=c(0,1,0), period=12))->l
>auto.arima(x)->k
>sd(l$resid)
>sd(k$resid)
>predict(l,n.ahead=1)
>predict(k,n.ahead=1)
>
>1. I understand that auto.arima will find the best time series 
>model choosing the smaller AIC, BIC and AICc from competing 
>models, but my model finds a smaller AIC than that of the 
>auto.arima model, yet the sd of the residuals for my model is 
>somehow bigger. 
>Why? Am I missing something? 
>The sd of the residuals for my model is bigger, and so 
>is the se for the predicted value. Which model would you 
>choose between these two, and why?   
>

Hello Eugen,

in a nutshell, I would use neither of these models, but rather an ARMA(1,
0, 1) fitted to log(x). Now, to your questions. If you use the
"trace = TRUE" argument in auto.arima(), you will see that your model
specification (l) is not tested. Why is this? Because you supply a plain
vector, the frequency is 1 (i.e. frequency(x) returns 1). If you now look at
the code of auto.arima(), it is clear that seasonal differences are then not
tested for. 

Try this instead:

x <- ts(x, frequency = 12)
k <- auto.arima(x, D = 1, trace = TRUE)
logLik(k)
k$aic

Hence, this yields an ARIMA(1, 0, 1)(2, 1, 0)[12] as an "optimal" model
specification, which yields an even "better" result than your l model.
However, the results you report for l and k can be attributed to
over-fitting / over-differencing. If you examine your series more
closely:

plot(x)
acf(x)
pacf(x)
library(urca)
ur.kpss(x)
plot(ur.za(x))

i.e. the traditional identification stage of the Box-Jenkins approach, you
will detect that:
1) The series does not seem to be stationary with respect to its variance,
but it is not "trending".
2) The ACF and PACF taper off slowly, neither has a single spike, and the
PACF gives no hint of seasonality.
3) Your series is stationary with a structural break.


Therefore, one can use the log-transform of x for variance stabilisation
and specify an ARMA(1, 0, 1)-model:

xl <- log(x)
m <- arima(xl, order=c(1, 0, 1))
m


Best,
Bernhard


>2. This question is more theoretical 
>
> m<-sample(c(10:20),10,replace=T)
> f<-sample(c(10:20),10,replace=T)
> t<-m+f
> s<-rbind(m,f,t)
> s
>
>Let's say I have a panel sample at disposal and consider m to 
>be the monthly average quantity of juice consumption for the  
>male part of the sample and f to be the monthly average 
>quantity of juice consumption for the  female part of the 
>sample, and t the average quantity of juice consumption for 
>the whole sample. For the mean of the whole sample i have a 
>confidence interval of say +/-2 each month (say I have a 
>sample of 2000 individuals). If I try to come up with a 
>confidence interval only for the male population (which in my 
>sample is say 1000) it would certainly be bigger, because I 
>now have a male sample of 1000 for determining the mean 
>consumption for the whole male population. So my confidence 
>interval is bigger for mean male consumption than for the 
>whole sample (because N declines from 2000 to 1000). Now if I 
>tried to predict the next month's consumption for both my 
>time series (male and whole sample), the prediction would not 
>"care" that when establishing the
> mean consumption i used first 2000 people and then 1000. Am I right?
>Imagine that each month (of the 10 that I sampled above) has 
>such a confidence interval of +/-3. Now, how would a future 
>prediction incorporate the fact that my mean 
>consumption is not measured via a census, but using a sample, 
>and that the number is an estimate of the real consumption, 
>within a confidence interval?
>Is there a good reference text for this incorporation of the 
>confidence interval  of past values in determining  the future 
>values ? 
>
>Thank you and have a great day!
>
>
>
>
>   
>-
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Need good Reference Material and Reading about Gaussian Copulas

2007-12-12 Thread Pfaff, Bernhard Dr.
Hello Neil,

you will find decent and well-written papers on:

http://www.math.ethz.ch/~embrecht/

http://www.ma.hw.ac.uk/~mcneil/

http://www.math.uni-leipzig.de/~tschmidt/#publications


Best,
Bernhard

ps: Incidentally, the monograph http://press.princeton.edu/titles/8056.html 
contains nice illustrations too. See the packages QRMlib, copula (JSS: "Enjoy 
the Joy of Copulas: With a Package copula", Vol. 21, Issue 4, Oct 2007), 
mlCopulaSelection, sbgcop and fCopulae on CRAN for implementations of copulae.
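
For a concrete starting point, here is a minimal sketch with the copula
package; the simulation and fitting calls (rCopula, pobs, fitCopula) follow
the current copula API and are my assumption, so they may be named
differently in older releases:

library(copula)
## bivariate Gaussian copula with correlation 0.7
nc <- normalCopula(param = 0.7, dim = 2)
## simulate from it and fit a Gaussian copula to the pseudo-observations
set.seed(42)
u   <- rCopula(500, nc)
fit <- fitCopula(normalCopula(param = 0.5, dim = 2), pobs(u), method = "ml")
fit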
  

>-Ursprüngliche Nachricht-
>Von: [EMAIL PROTECTED] 
>[mailto:[EMAIL PROTECTED] Im Auftrag von 
>[EMAIL PROTECTED]
>Gesendet: Mittwoch, 12. Dezember 2007 16:25
>An: R-help@r-project.org
>Betreff: [R] Need good Reference Material and Reading about 
>Gaussian Copulas
>
>Can anyone advise me on some practical papers or books 
>on Gaussian copulas? Anything in the genre of "Copulas for Dummies"
>would be a help.
>
>As simple and approachable as possible, with minimal pedantic style.
>Thanks,
>Neil
>
>
>
>
>This information is being sent at the recipient's 
>reques...{{dropped:16}}
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] How to use R to estimate a model which has two sets of lagged time series independent variables

2007-12-13 Thread Pfaff, Bernhard Dr.
>Hi,
>
>I would like to use R to estimate the following model:
>
>X(t) = a + b1*X(t-1) + b2*X(t-2) + c1*Y(t) + c2*Y(t-1) + c3*Y(t-2)
>
>Is there any R function that performs this type of estimation? I know
>that if I only have one time series (i.e. lagged value of X) on the
>right hand side then there are R functions to do the estimation. I am
>thinking a work around by preparing X(t-1), X(t-2),Y(t),Y(t-1) and
>Y(t-2) as five independent variables and use the lm() function to
>performance the estimation. Please advise. Thanks.
>
>Michael
>

Hello Michael,

you can use the function dynlm() contained in the CRAN package of the
same name, or you can use the function VAR() contained in the package
vars. In the latter case, you would only need the equation (lm object)
with X_t as the left-hand-side variable.
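
A minimal sketch with dynlm() on simulated data (X and Y below are of
course hypothetical stand-ins for your series):

library(dynlm)
set.seed(1)
Y <- ts(arima.sim(list(ar = 0.5), 120))
X <- ts(0.8 + 0.3 * as.numeric(Y) + rnorm(120))
## X(t) = a + b1*X(t-1) + b2*X(t-2) + c1*Y(t) + c2*Y(t-1) + c3*Y(t-2)
fit <- dynlm(X ~ L(X, 1:2) + Y + L(Y, 1:2))
summary(fit)

The lm() work-around you describe would require building the lagged
regressors by hand (e.g. via embed()); dynlm() does this alignment for
you.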

Best,
Bernhard



>This e-mail message including any attachments may be legally 
>privileged and confidential under applicable law, 
>and is meant only for the intended recipient(s).  If you 
>received this message in error, please reply to the sender, 
>adding "SENT IN ERROR" to the subject line, then delete this message. 
>Thank you.
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Stationarity of a Time Series

2008-01-22 Thread Pfaff, Bernhard Dr.
Hello Stephen,

stationarity tests as well as unit root tests have been implemented in a
couple of packages. For instance, as already mentioned: tseries, but
also uroot, fUnitRoots and urca. See the annotated task views
"Econometrics" and "Finance" for further information.

Best,
Bernhard 
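
ps: As a minimal illustration of two of the available checks, run on a
simulated random walk (purely for demonstration):

library(tseries)   # kpss.test(): null hypothesis of stationarity
library(urca)      # ur.df(): ADF test, null hypothesis of a unit root
set.seed(123)
y <- cumsum(rnorm(200))
kpss.test(y)
summary(ur.df(y, type = "drift", selectlags = "AIC"))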

>
>kpss.test in the tseries package should do the trick
>
>On Jan 21, 2008 12:36 PM, stephen sefick <[EMAIL PROTECTED]> wrote:
>
>> Does anyone know of a test for stationarity of a time series, or like
>> all ordination techniques it is a qualitative assessment of a
>> quantitative result.  Books, papers, etc. suggestions welcome.
>> thanks
>>
>> Stephen
>>
>> --
>> Let's not spend our time and resources thinking about things that are
>> so little or so large that all they really do for us is puff 
>us up and
>> make us feel like gods.  We are mammals, and have not exhausted the
>> annoying little problems of being mammals.
>>
>>  
>  -K. Mullis
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] dynlm and lm: should they give same estimates?

2008-10-16 Thread Pfaff, Bernhard Dr.
Hello Werner,

this is easily clarified. The code in my book contains an error: please
replace the line

error.lagged <- error[-c(99, 100)]

with

error.lagged <- error[-c(1, 100)]

The lagged equilibrium error has to cover observations 2 to 99 so that
it lines up with the differenced left-hand side running from observation
3 to 100; dropping observations 99 and 100 instead shifts it by one
period, which is why your lm() and dynlm() results differ.


I will file this in the errata section on my web-site and will correct
the relevant example in the urca and vars packages for their next
releases.


Best,
Bernhard

>
>Hi,
>
>I was wondering why the results from lm and dynlm are not the 
>same for what I think is the same model. 
>I have just modified example 4.2 from the Pfaff book, please 
>see below for the code and results.
>
>Can anyone tell me what I am doing wrong?
>
>Many thanks,
>  Werner
>
>set.seed(123456)
>e1 <- rnorm(100)
>e2 <- rnorm(100)
>y1 <- ts(cumsum(e1))
>y2 <- ts(0.6*y1 + e2)
>lr.reg <- lm(y2 ~ y1)
>error <- ts(residuals(lr.reg))
>error.lagged <- error[-c(99, 100)]
>
>dy1 <- diff(y1)
>dy2 <- diff(y2)
>diff.dat <- data.frame(embed(cbind(dy1, dy2), 2))
>colnames(diff.dat) <- c('dy1', 'dy2', 'dy1.1', 'dy2.1')
>ecm.reg <- lm(dy2 ~ error.lagged + dy1.1 + dy2.1,
>  data=diff.dat)
>ecm.dynreg <- dynlm(d(y2) ~ L(error) + L(d(y1),1) + L(d(y2),1))
>summary(ecm.reg)
>summary(ecm.dynreg)
>
>> summary(ecm.reg)
>
>Call:
>lm(formula = dy2 ~ error.lagged + dy1.1 + dy2.1, data = diff.dat)
>
>Residuals:
>Min  1Q  Median  3Q Max 
>-2.9588 -0.5439  0.1370  0.7114  2.3065 
>
>Coefficients:
>  Estimate Std. Error t value Pr(>|t|)
>(Intercept)   0.003398   0.103611   0.0330.974
>error.lagged -0.968796   0.158554  -6.110 2.24e-08 ***
>dy1.1 0.808633   0.112042   7.217 1.35e-10 ***
>dy2.1-1.058913   0.108375  -9.771 5.64e-16 ***
>---
>Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
>
>Residual standard error: 1.026 on 94 degrees of freedom
>Multiple R-Squared: 0.5464, Adjusted R-squared: 0.5319 
>F-statistic: 37.74 on 3 and 94 DF,  p-value: 4.243e-16 
>
>> summary(ecm.dynreg)
>
>Time series regression with "ts" data:
>Start = 3, End = 100
>
>Call:
>dynlm(formula = d(y2) ~ L(error) + L(d(y1), 1) + L(d(y2), 1))
>
>Residuals:
>Min  1Q  Median  3Q Max 
>-2.9588 -0.5439  0.1370  0.7114  2.3065 
>
>Coefficients:
> Estimate Std. Error t value Pr(>|t|)
>(Intercept)  0.003398   0.103611   0.033   0.9739
>L(error)-0.968796   0.158554  -6.110 2.24e-08 ***
>L(d(y1), 1)  0.245649   0.126996   1.934   0.0561 .  
>L(d(y2), 1) -0.090117   0.105938  -0.851   0.3971
>---
>Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
>
>Residual standard error: 1.026 on 94 degrees of freedom
>Multiple R-Squared: 0.5464, Adjusted R-squared: 0.5319 
>F-statistic: 37.74 on 3 and 94 DF,  p-value: 4.243e-16 
>
>> 
>
>
>
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] using dvi with latex object: directory not correctly set, maybe due to error in shQuote()

2008-12-17 Thread Pfaff, Bernhard Dr.
Hello Marco,

this might not be evident at first sight, but have you set the environment 
variable "R_SHELL"? If you look at the dvi method for latex objects you will 
find a call to sys(), which in turn calls shell(); if the argument shell is 
unset, then the contents of "R_SHELL" will be used. Hence, what does:

Sys.getenv("R_SHELL")

yield on your machine? I reckon "" will be returned. Therefore, as a first step, 
set Sys.setenv(R_SHELL = "cmd.exe"), or set it permanently in your R environment 
file. Having done so, running dvi(latex.obj) at least no longer produces the 
warning that everything beyond "cd" is skipped, and the command built via paste() 
is parsed. The next problem is the path to the randomly generated file, which 
latex cannot handle. The following alternative might work for you:

dvi.latex2 <- function(object, prlog = FALSE, nomargins = TRUE,
                       width = 5.5, height = 7, ...)
{
    fi  <- object$file
    sty <- object$style
    if (length(sty))
        sty <- paste("\\usepackage{", sty, "}", sep = "")
    if (nomargins)
        sty <- c(sty, paste("\\usepackage[paperwidth=", width,
                            "in,paperheight=", height,
                            "in,noheadfoot,margin=0in]{geometry}", sep = ""))
    tmp    <- tempfile(tmpdir = tempdir())
    tmptex <- paste(tmp, "tex", sep = ".")
    infi   <- readLines(fi, n = -1)
    cat("\\documentclass{report}", sty, "\\begin{document}\\pagestyle{empty}",
        infi, "\\end{document}\n", file = tmptex, sep = "\n")
    sc <- if (under.unix) "&&" else "&"
    shell(paste("cd", shQuote(tempdir()), sc, optionsCmds("latex"),
                "-interaction=scrollmode", shQuote(tmp)), translate = TRUE)
    if (prlog)
        cat(scan(paste(tmp, "log", sep = "."), list(""), sep = "\n")[[1]],
            sep = "\n")
    fi <- paste(tmp, "dvi", sep = ".")
    structure(list(file = fi), class = "dvi")
}


And therefore:

tbl.loc   <- matrix(1:4, ncol=2)
latex.obj <- latex(tbl.loc)
tempdir <- function(){"H:/PROJECTS/data"}  # mask tempdir() so files go to a fixed directory
Sys.getenv("R_SHELL")
Sys.setenv(R_SHELL = "cmd.exe")
Sys.getenv("R_SHELL")
## set options(xdvicmd = 'dviout') appropriately; I use TeXLive and do not have
## yap installed. Working with MiKTeX there should be no need to change the
## default viewer.
dvi.latex2(latex.obj)
## The dvi file might not be displayed immediately after production, but it can
## be opened manually.


Does this work for you? It is probably also a good idea to address this 
problem directly to the package maintainer (already cc'ed).

Best,
Bernhard


>
>Dear friends of R,
>
>I want to produce a pdf file with the contents of a matrix. I 
>employ the latex command in combination with dvi, both 
>contained in the Hmisc package. It seems to me that the 
>function does not correctly set the directory. 
>
>> tbl.loc   <- matrix(1:4, nc=2)
>> latex.obj <- latex(tbl.loc)
>> dvi(latex.obj)
>warning: extra args ignored after 'cd'
>H:\PROJECTS\data
>warning: extra args ignored after 'yap'
>
>When I have a look at the function dvi.latex I find the 
>following line which, I guess, is meant to set the new 
>directory and to run latex.
>
>sys(paste("cd", shQuote(tempdir()), sc, optionsCmds("latex"), 
>"-interaction=scrollmode", shQuote(tmp)), output = FALSE)
>
>Running just the piece shQuote(tempdir()) returns
>> shQuote(tempdir())
>[1] "\"C:\\DOKUME~1\\ferimawi\\LOKALE~1\\Temp\\Rtmpr4CG3A\""
>> tempdir()
>[1] "C:\\DOKUME~1\\ferimawi\\LOKALE~1\\Temp\\Rtmpr4CG3A"
>
>Is the leading "\" causing the problem? How can I fix the problem?
>
>The R-help dealt with a related problem some while ago but I 
>do not think that it resolves my problem:
>http://finzi.psych.upenn.edu/R/Rhelp02a/archive/62975.html
>
>I am using Windows XP, R version 2.7.2 (2008-08-25) and Hmisc 
>version 3.4-4.
>
>Thanks in advance for your help.
>Regards
>Marco
>
>Marco Willner 
>Senior Analyst Quantitative Asset Allocation 
>Feri Finance AG 
>Haus am Park 
>Rathausplatz 8-10 
>D-61348 Bad Homburg v.d.H 
>Tel: +49 (6172) 916 3037 
>Fax: +49 (6172) 916 1037 
>E-Mail: marco.will...@feri.de 
>Internet: www.feri.de 
>Handelsregister des Amtsgerichts Bad Homburg v.d.H. (HRB 7473) 
>Vorstände: Michael Stammler (Sprecher), Dr. Matthias Klöpper, 
>Dr. Helmut Knepel, Dr. Heinz-Werner Rapp, Arndt Thorn 
>Vorsitzender des Aufsichtsrates: Dr. Uwe Schroeder-Wildberg  

Re: [R] Download daily weather data

2009-02-27 Thread Pfaff, Bernhard Dr.
Dear Thomas,

more for the sake of completeness and as an alternative to R: there are GRIB 
data sets [1] available (some for free), and there is the GPL software GrADS 
[2]. Because the GRIB format is well documented, it should be possible to get it 
into R easily and produce your own plots/weather analysis. I do not know, and 
have not checked, whether somebody has already done so.

I use these tools, among others, during longer offshore sailing passages.

Best,
Bernhard 

[1] http://www.grib.us/
[2] http://www.iges.org/grads/

>-Ursprüngliche Nachricht-
>Von: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] Im Auftrag von Scillieri, John
>Gesendet: Donnerstag, 26. Februar 2009 22:58
>An: 'James Muller'; 'r-help@r-project.org'
>Betreff: Re: [R] Download daily weather data
>
>Looks like you can sign up to get XML feed data from Weather.com
>
>http://www.weather.com/services/xmloap.html
>
>Hope it works out!
>
>-Original Message-
>From: r-help-boun...@r-project.org 
>[mailto:r-help-boun...@r-project.org] On Behalf Of James Muller
>Sent: Thursday, February 26, 2009 3:57 PM
>To: r-help@r-project.org
>Subject: Re: [R] Download daily weather data
>
>Thomas,
>
>Have a look at the source code for the webpage (ctrl-u in firefox,
>don't know in internet explorer, etc.). That is what you'd have to
>parse in order to get the forecast from this page. Typically when I
>parse webpages such as this I use regular expressions to do so (and I
>would never downplay the usefulness of regular expressions, but they
>take a little getting used to). There are two parts to the task: find
>patterns that allow you to pull out the datum/data you're after; and
>then write a program to pull it/them out. Also, of course, download
>the webpage (but that's no issue).
>
>I bet you'd be able to find a comma separated value (CSV) file
>containing the weather report somewhere, which would probably involve
>a little less labor in order to produce your automatic wardrobe
>advice.
>
>James
>
>
>
>On Thu, Feb 26, 2009 at 3:47 PM, Thomas Levine 
> wrote:
>> I'm writing a program that will tell me whether I should wear a coat,
>> so I'd like to be able to download daily weather forecasts and daily
>> reports of recent past weather conditions.
>>
>> The NOAA has very promising tabular forecasts
>> 
>(http://forecast.weather.gov/MapClick.php?CityName=Ithaca&state
=NY&site=BGM&textField1=42.4422&textField2=-76.5002&e=0>&FcstType=digital),
>> but I can't figure out how to import them.
>>
>> Someone must have needed to do this before. Suggestions?
>>
>> Thomas Levine!
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
 This e-mail and any attachments are confidential, may 
>contain legal, professional or other privileged information, 
>and are intended solely for the addressee.  If you are not the 
>intended recipient, do not use the information in this e-mail 
>in any way, delete this e-mail and notify the sender. CEG-IP1
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Multivariate GARCH Package

2009-03-04 Thread Pfaff, Bernhard Dr.
Dear Mohammad,

have a look at the finance task view on CRAN:
http://cran.at.r-project.org/web/views/Finance.html
(Dirk has nicely updated this page recently). 

In addition, Patrick Burns provides a recipe for PC-GARCH models on his 
web-site: 
http://www.burns-stat.com/pages/Working/multgarchuni.pdf


HTH,
Bernhard

>
>Good day everyone,
> 
>I tried to find a multivariate GARCH package and failed to 
>find one. Although when I searched R I found the following 
>link which describes the package:
> 
>http://www.r-project.org/user-2006/Slides/Schmidbauer+Tunalioglu.pdf
> 
>can any one help me with this issue.
> 
>Thank you in advance
>   [[alternative HTML version deleted]]
>
>


Re: [R] ARCH LM test for univariant time series

2008-02-04 Thread Pfaff, Bernhard Dr.
Dear All,


one can visually inspect ARCH effects by plotting the acf/pacf of the
squared residuals from an OLS estimation. This can be as simple as a
demeaned series. Further, one can run an auxiliary regression of the
squared series on q of its own lagged squared values and a constant.
The test statistic (N-q)*R^2 is distributed as chi-squared with q
degrees of freedom.

Something along the lines of:

archlmtest <- function(x, lags, demean = FALSE)
{
    x <- as.vector(x)
    if (demean)
        x <- scale(x, center = TRUE, scale = FALSE)
    lags <- lags + 1
    mat  <- embed(x^2, lags)
    arch.lm   <- summary(lm(mat[, 1] ~ mat[, -1]))
    STATISTIC <- arch.lm$r.squared * length(resid(arch.lm))
    names(STATISTIC) <- "Chi-squared"
    PARAMETER <- lags - 1
    names(PARAMETER) <- "df"
    PVAL   <- 1 - pchisq(STATISTIC, df = PARAMETER)
    METHOD <- "ARCH LM-test"
    result <- list(statistic = STATISTIC, parameter = PARAMETER,
                   p.value = PVAL, method = METHOD,
                   data.name = deparse(substitute(x)))
    class(result) <- "htest"
    return(result)
}

should work and yield equal results as mentioned earlier in this thread.
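
A quick check on simulated data (purely illustrative):

set.seed(123)
e <- rnorm(500)
archlmtest(e, lags = 12)   # white noise, so no ARCH effects are expected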

Best,
Bernhard


>
>Spencer,
>
>The warning message is sent from VAR, it basically lets you 
>know that the
>data it used had no column names and it had to supply them 
>using y1, y2, y3,
>etc. It can be suppressed by including options(warn=-1) in the 
>function.
>
>Anyway, it seems that the p value from my function does not match
>FinMetrics'. I guess the function doesn't work... hmm...
>
>
>On 2/2/08, Spencer Graves <[EMAIL PROTECTED]> wrote:
>>
>> Dear Tom:
>>
>>  Your revised function eliminates the discrepancy in the 
>degrees of
>> freedom but is still very different from the numbers reports 
>on Tsay, p.
>> 102:
>>
>> archTest(log(1+as.numeric(m.intc7303)), lag=12)
>>
>>ARCH test (univariate)
>>
>> data:  Residual of y1 equation
>> Chi-squared = 13.1483, df = 12, p-value = 0.3584
>>
>> Warning message:
>> In VAR(s, p = 1, type = "const") :
>> No column names supplied in y, using: y1, y2, y3, y4, y5, y6, y7, y8,
>> y9, y10, y11, y12 , instead.
>>
>>
>>  TOM:  What can you tell me about the warning message?
>>
>>  Thanks for your help with this.
>>  Spencer Graves
>>
>> tom soyer wrote:
>> > Spencer,
>> >
>> > Sorry, I forgot that the default lag in arch is 16. Here 
>is the fix. Can
>> you
>> > try it again and see if it gives the correct (or at least similar
>> compared
>> > to a true LM test) result?
>> >
>> > archTest=function(x, lags=12){
>> >  #x is a vector
>> >  require(vars)
>> >  s=embed(x,lags)
>> >  y=VAR(s,p=1,type="const")
>> >  result=arch(y,lags.single=lags,multi=F)$arch.uni[[1]]
>> >  return(result)
>> > }
>> >
>> > Thanks and sorry about the bug.
>> >
>> >
>> > On 2/2/08, Spencer Graves <[EMAIL PROTECTED]> wrote:
>> >
>> >> Dear Tom, Bernhard, Ruey:
>> >>
>> >>  I can't get that to match Tsay's example, but I have other
>> >> questions about that.
>> >>
>> >>  1.  I got the following using Tom's 'archTest' 
>function (below):
>> >>
>> >>
>> >>> archTest(log(1+as.numeric(m.intc7303)), lags=12)
>> >>>
>> >>ARCH test (univariate)
>> >>
>> >> data:  Residual of y1 equation
>> >> Chi-squared = 10.8562, df = 16, p-value = 0.8183
>> >>
>> >> Warning message:
>> >> In VAR(s, p = 1, type = "const") :
>> >> No column names supplied in y, using: y1, y2, y3, y4, y5, 
>y6, y7, y8,
>> >> y9, y10, y11, y12 , instead.
>> >>
>> >>
>> >>   ** First note that the answer has df = 16, even though I
>> >> supplied lags = 12.
>> >>
>> >>  2.  For (apparently) this example, S-Plus FinMetrics 
>'archTest'
>> >> function returned "Test for ARCH Effects:  LM Test.  Null 
>Hypothesis:
>> >> no ARCH effects.  Test Stat 43.5041, p.value 0..  
>Dist. under Null:
>> >> chi-square with 12 degrees of freedom".
>> >>
>> >>  3.  Starting on p. 101, Ruey mentioned "the Lagrange 
>multiplier
>> >> test of Engle (1982)", saying "This test is equivalent to 
>the usual F
>> >> test for" no regression, but refers it to a chi-square, not an F
>> >> distribution.  Clearly, there is a gap here, because the 
>expected value
>> >> of the F distribution is close to 1 [d2/(d2-2), where d2 
>= denominator
>> >> degrees of freedom;  http://en.wikipedia.org/wiki/F-distribution],
>> while
>> >> the expected value for a chi-square is the number of 
>degrees of freedom
>> >>
>> >>  Unfortunately, I don't feel I can afford the time to 
>dig into this
>> >> further right now.
>> >>
>> >>  Thanks for your help.
>> >>  Spencer Graves
>> >>
>> >> tom soyer wrote:
>> >>
>> >>> Spencer, how about something like this:
>> >>>
>> >>> archTest=function (x, lags= 16){
>> >>>  #x is a vector
>> >>>  require(vars)
>> >>>  s=embed(x,lags)
>> >>>  y=VAR(s,p=1,type="const")
>> >>>  result=arch(y,multi=F)$arch.uni[[1]]
>> >>>  return(result)
>> >>> }
>> >>>
>> >>> can you, or maybe Bernhard, check and see whether this 
>function gives
>> >>> the correct result?
>> >>>
>> >>> thanks,
>

Re: [R] ARCH LM test for univariant time series

2008-02-06 Thread Pfaff, Bernhard Dr.
Hello Spencer,

splendid. Please go ahead. I am wondering whether one should return the lm 
object too, and not only the htest object. The benefit would be that calling 
summary() on the lm object would also return the F-test mentioned earlier in 
this thread; alternatively, one could return just the F-test result as a 
separate list element. If so, a more appropriate function name would be 
archtest().

What do you think?

Best,
Bernhard

Dr. Bernhard Pfaff
International Structured Products Group
Director

Invesco Asset Management Deutschland GmbH
Bleichstrasse 60-62
D-60313 Frankfurt am Main

Tel: +49(0)69 29807 230
Fax: +49(0)69 29807 178
Email: [EMAIL PROTECTED]

Geschäftsführer: Karl Georg Bayer, Bernhard Langer, Dr. Jens Langewand, 
Alexander Lehmann, Christian Puschmann
Handelsregister: Frankfurt am Main, HRB 28469
Sitz der Gesellschaft: Frankfurt am Main
  

>-Ursprüngliche Nachricht-
>Von: Spencer Graves [mailto:[EMAIL PROTECTED] 
>Gesendet: Mittwoch, 6. Februar 2008 05:02
>An: Pfaff, Bernhard Dr.
>Cc: tom soyer; r-help@r-project.org
>Betreff: Re: AW: [R] ARCH LM test for univariant time series
>
>Dear Bernhard: 
>
>  Thanks very much.  Unless you object, I shall add it to the 
>'FinTS' library as "ArchTest" (comparable to the S-PLUS Finmetrics 
>'archTest' function) -- with a worked example in '\scripts\ch03.R'. 
>
>  Best Wishes,
>  Spencer
>
>Pfaff, Bernhard Dr. wrote:
>> Dear All,
>>
>>
>> one can visually inspect ARCH-effects by plotting acf/pacf of the
>> squared residuals from an OLS-estimation. This can be as simple as a
>> demeaned series. Further one can run an auxiliary regression by
>> regressing q lagged squared values and a constant on the 
>squared series
>> itself. This test statistic (N-q)*R^2 is distributed as chisq with q
>> degrees of freedom.  
>>
>> Something along the lines:
>>
>> archlmtest <- function (x, lags, demean = FALSE) 
>> {
>>   x <- as.vector(x)
>>   if(demean) x <- scale(x, center = TRUE, scale = FALSE)
>> lags <- lags + 1
>> mat <- embed(x^2, lags)
>> arch.lm <- summary(lm(mat[, 1] ~ mat[, -1]))
>> STATISTIC <- arch.lm$r.squared * length(resid(arch.lm))
>> names(STATISTIC) <- "Chi-squared"
>> PARAMETER <- lags - 1
>> names(PARAMETER) <- "df"
>> PVAL <- 1 - pchisq(STATISTIC, df = PARAMETER)
>> METHOD <- "ARCH LM-test"
>> result <- list(statistic = STATISTIC, parameter = PARAMETER, 
>> p.value = PVAL, method = METHOD, data.name =
>> deparse(substitute(x)))
>> class(result) <- "htest"
>> return(result)
>> }
>>
>> should work and yield equal results as mentioned earlier in 
>this thread.
>>
>> Best,
>> Bernhard
>>
>>
>>   
>>> Spencer,
>>>
>>> The warning message is sent from VAR, it basically lets you 
>>> know that the
>>> data it used had no column names and it had to supply them 
>>> using y1, y2, y3,
>>> etc. It can be suppressed by including options(warn=-1) in the 
>>> function.
>>>
>>> Anyway, it seems that the p value from my function does not match
>>> FinMetrics'. I guess the function doesn't work... hmm...
>>>
>>>
>>> On 2/2/08, Spencer Graves <[EMAIL PROTECTED]> wrote:
>>> 
>>>> Dear Tom:
>>>>
>>>>  Your revised function eliminates the discrepancy in the 
>>>>   
>>> degrees of
>>> 
>>>> freedom but is still very different from the numbers reports 
>>>>   
>>> on Tsay, p.
>>> 
>>>> 102:
>>>>
>>>> archTest(log(1+as.numeric(m.intc7303)), lag=12)
>>>>
>>>>ARCH test (univariate)
>>>>
>>>> data:  Residual of y1 equation
>>>> Chi-squared = 13.1483, df = 12, p-value = 0.3584
>>>>
>>>> Warning message:
>>>> In VAR(s, p = 1, type = "const") :
>>>> No column names supplied in y, using: y1, y2, y3, y4, y5, 
>y6, y7, y8,
>>>> y9, y10, y11, y12 , instead.
>>>>
>>>>
>>>>  TOM:  What can you tell me about the warning message?
>>>>
>>>>  Thanks for your help with this.
>>>>  Spencer Graves
>>>>
>>>> tom soyer wrote:
>>>>   
>>>>> Spencer,
>>>>>
>>>>> Sorry, I forgot that the default lag in arch is 16. Her

Re: [R] Spectral Analysis of Time Series in R

2008-12-03 Thread Pfaff, Bernhard Dr.
Hello Alexander,


for (3) see the CRAN-package "vars".
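
A minimal sketch for (3), using the example data set shipped with vars
(purely illustrative):

library(vars)
data(Canada)
var.2c <- VAR(Canada, p = 2, type = "const")
## Wald-type Granger causality test: does "e" Granger-cause the others?
causality(var.2c, cause = "e")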

Best,
Bernhard

>
>Dear R Community,
>
>I am currently student at the Vienna University of Technology 
>writing my 
>Diploma thesis on causality in time series and doing some analyses of 
>time series in R. I have the following questions:
>
>(1) Is there a function in R to estimate the PARTIAL spectral 
>coherence 
>of a multivariate time series? If yes, how does this work? Is there a 
>test in R if the partial spectral coherence between two variables is 
>zero? The functions I know (spectrum, etc.) only work to estimate the 
>spectral coherence.
>
>(2) For some causality analysis I need an estimate of the 
>inverse of the 
>spectral density matrix of a multivariate time series. Is there any 
>possibility in R to get this? Actually, I would be happy if I could at 
>least get a functional estimate of the spectral density 
>matrix. I guess 
>this should work because R can plot the kernel density 
>estimator of the 
>spectral density, so it should be possible to extract the underlying 
>function estimate.
>
>(3) Is there any possibility to do Granger Causality in R? That means 
>fitting an VAR model and testing if some coefficients are zero.
>
>Thank you very much in advance!
>
>Best Regards,
>Alexander
>T
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Vars package - specification of VAR

2008-12-08 Thread Pfaff, Bernhard Dr.
Hello Bernd,

by definition, a VAR includes only **lagged endogenous** variables on the
right-hand side. You might want to consider SVAR() contained in the same
package, or fit a VECM (see the CRAN package 'urca').
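
For illustration, a minimal sketch of an A-model SVAR in which a
contemporaneous relation between the endogenous variables is estimated
(the data set and the restriction pattern are illustrative assumptions):

library(vars)
data(Canada)
var.2c <- VAR(Canada[, c("e", "prod")], p = 2, type = "const")
## A matrix: NA entries are the contemporaneous coefficients to be estimated
amat <- diag(2)
amat[2, 1] <- NA
svar.a <- SVAR(var.2c, Amat = amat, estmethod = "direct")
svar.a$A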

Best,
Bernhard 

>Hi useRs,
>
>Been estimating a VAR with two variables, using VAR() of the 
>package "vars".
>
>Perhaps I am missing something, but how can I include the 
>present time t variables, i.e. for the set of equations to be:
>
>x(t) = a1*y(t) + a2*y(t-1) + a3*x(t-1) + ...
>Y(t) = a1*x(t) + a2*x(t-1) + a3*y(t-1) + ...
>
>The types available in function VAR() allow for seasonal 
>dummies, time trends and constant term.
>
>But the terms
>
>a1*y(t)
>a1*x(t)
>
>always seem to be excluded by default, thus only lagged 
>variables enter the right side.
>
>How can I specify VAR() such that a1*y(t) and a1*x(t) are included? 
>Or would I have to estimate with lm() instead?
>
>Many thanks in advance,
>
>Bernd
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] About adf.test

2008-12-08 Thread Pfaff, Bernhard Dr.
Hello Kamlesh,

have a look at: fUnitRoots, tseries, urca, uroot
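
For the concrete error below: adf.test() lives in the tseries package, so
a minimal session would be:

install.packages("tseries")   # only needed once
library(tseries)
x <- rnorm(1000)
adf.test(x)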

Best,
Bernhard

>
>Dear sir,
>
>   I am a new user of R statistical package. I want to perform
>adf.test(augmented dickey fuller test), which packages I need 
>to install in
>order to perform it. I am getting following message on my monitor.
>*x<-rnorm(1000)
>> adf.test(x)
>Error: could not find function "adf.test"
>
>*I am waiting for your response.
>
>Kamlesh Kumar.
>
>-- 
>  Kamlesh Kumar
>  Appt. No. - QQ420,
>  Vila Universitaria, Campus de la UAB,
>  08193 Bellatera, Cerdanyola del Valles,
>  Barcelona, Spain.
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Cointegration and ECM in Package {urca}

2008-12-16 Thread Pfaff, Bernhard Dr.
>
>Dear R Core Team,
>
> 
>
>I am using package {urca} to do cointegration and estimate ECM model,
>but I have the following two problems:
>
> 
>
>(1)I use ca.jo() to do cointegration first and can get the
>cointegration rank, alpha and beta.  The next step is to test some
>restrictions on beta with blrtest(),bh5lrtest(), and bh6lrtest().  But
>none of them can add restrictions on all the cointegration equations at
>the same time if have more than one cointegration rank.  For example,
>there are three cointegration in my case.  I want to add three 
>different
>restrictions on them at the same time.  What can I do?

Dear Christine,

would you be so kind as to give a more precise example of what you would
like to achieve? 
The examples in the above mentioned functions are replications of the
results in:

Johansen, S. and Juselius, K. (1990), Maximum Likelihood
 Estimation and Inference on Cointegration - with Applications to
 the Demand for Money, _Oxford Bulletin of Economics and
 Statistics_, *52, 2*, 169-210.


>
>(2)What I want to do is to estimate ECM model with imposing
>restriction on beta or on both alpha and beta at the same time.  It
>looks like that command cajo.test() can do this estimation.  
>It shows up
>in the package but there is no example there.  I tried to find some
>examples but I cannot find any even if I have read  the book 
>Analysis of
>Integrated and Cointegrated Time Series with R.  Can you show me how to
>use this command or some examples? 

There is no function cajo.test() contained in the package urca. Did you
mean ablrtest() instead? If so, have a look at its example and the
reference given above, as well as:

Johansen, S. (1991), Estimation and Hypothesis Testing of
 Cointegration Vectors in Gaussian Vector Autoregressive Models,
 _Econometrica_, *Vol. 59, No. 6*, 1551-1580.

Best,
Bernhard

>
> 
>
>Thank you very much in advance.  Best wishes.
>
> 
>
>Christina
>
> 
>
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Using R for large econometric models

2007-11-07 Thread Pfaff, Bernhard Dr.
Dear Dietrich,

in the first place, it would have been helpful to know which kind of 
econometric models your colleague wants to utilise. With respect to econometric 
methods you might want to have a look at the CRAN Task Views for econometrics 
and finance, to see what is already available:

http://cran.at.r-project.org/src/contrib/Views/Econometrics.html

http://cran.at.r-project.org/src/contrib/Views/Finance.html


From a practical point of view, I have by now not encountered any size 
limitations; these experiences are gained from working with high-frequency 
multivariate data in the finance context as well as simultaneous multiple 
equation models that are solved on a per-period basis by applying the 
Gauß-Seidel algorithm. My colleagues are using R in the context of stock 
selection, where multivariate time series data on more than 3,000 stocks are 
processed on a daily basis. We have not encountered any problems so far. 
Basically, the maximal data handling size is determined by your computer's 
memory and the style of your coding.

Best,
Bernhard




>
>Dear helpeRs,
>
>a colleague of mine would like to give R a try.  He uses econometric
>models which typically involve a large number of variables, esp. time
>series.  Having no experience with handling very large data sets myself
>I turn to you.
>
>1. Could you please describe your experiences to cope with these
>   situations?
>
>2. What kind of difficulties will he have to face? Are there special
>   tricks (packages) he might try?
>
>3. Can you recommend to use R?
>
>
>Sorry, if my question is a bit vague but at this point I'm not able to
>give any further details.
>
>Any help is very much appreciated.
>
>D. Trenkler 
>
>-- 
>Dietrich Trenkler c/o Universitaet Osnabrueck 
>Rolandstr. 8; D-49069 Osnabrueck, Germany
>email: [EMAIL PROTECTED]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] Multivariate time series

2007-11-12 Thread Pfaff, Bernhard Dr.
Hello Giusy,

in addition to Frank's suggestion, you might want to specify and estimate
a VECM (function ca.jo() in package urca). This object can be
transformed to its level-VAR representation (function vec2var() in
package vars), for which a predict method exists (fan charts can be
generated too). The advantage of this approach compared to pure
VAR modelling (in levels or first differences, depending on the
stationarity of the series in question) is that you might capture the
long-run relationship between your price series (arbitrage condition?).
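
A minimal sketch of this route (the data set and the chosen cointegration
rank r = 1 are purely illustrative):

library(urca)
library(vars)
data(Canada)
vecm <- ca.jo(Canada, type = "trace", ecdet = "const", K = 2)
summary(vecm)                       # determine the cointegration rank
var.level <- vec2var(vecm, r = 1)   # level-VAR representation
pred <- predict(var.level, n.ahead = 8)
fanchart(pred)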

Best,
Bernhard

>
>You may want to have a look at the vars package
>Frank
>
>Giusy schrieb:
>> Hello to everyone!
>> I have a question for you..I need to predict multivariate 
>time series, for
>> example sales of 2 products related one to the other, having 
>the 2 prices
>> like inputs..
>> Is there in R a function to do it? I saw dse package but I 
>didn't find what
>> a I'm looking for..
>> Could anyone help me?
>> Thank you very much
>> Giusy
>>
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] vars package, impulse response functions ??

2007-09-13 Thread Pfaff, Bernhard Dr.
Hello Spencer,

impulse response analysis is the wrong tool for your investigation. What
you are after is the final form of your model, i.e., the endogenous
variables depend only on your exogenous variables, including
deterministic regressors: y_t = A(L)^-1 B(L) x_t. The key word is then
multiplier analysis, as used in the context of structural multiple
equation models. Depending on your objective you can then retrieve from
the final form of your model the impact, intermediate and long-run
multipliers. This is outlined, for instance, in the monographs:

@book{BOOK,
author={George G. Judge and William E. Griffiths and R. Carter Hill and
Helmut L{\"u}tkepohl and Tsoung-Chao Lee},
title={The Theory and Practice of Econometrics (Wiley Series in
Probability and Statistics)},
year={1985},
price={$131.95},
publisher={Wiley},
isbn={047189530X}
} 

@book{BOOK,
author={George G. Judge and R. Carter Hill and William E. Griffiths and
Helmut L{\"u}tkepohl and Tsoung-Chao Lee},
title={Introduction to the Theory and Practice of Econometrics, 2nd
Edition},
year={1988},
publisher={Wiley},
isbn={0471624144}
} 

@book{BOOK,
author={Helmut L{\"u}tkepohl},
title={New Introduction to Multiple Time Series Analysis},
year={2007},
price={$49.68},
publisher={Springer},
isbn={3540262393}
} 

Now, to your problem at hand. You can retrieve the relevant coefficients
by using A() and B(), and then you have to set up the final form by hand.
You can then compute the multipliers you are interested in. Please note
that the VAR is estimated by OLS. You might want to consider estimating
the VAR by FGLS if you have restrictions in your VAR.
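
A minimal sketch of how the pieces could be put together; the accessor
functions Acoef()/Bcoef() used here, as well as the data set and the
single exogenous regressor, are illustrative assumptions:

library(vars)
data(Canada)
exo   <- Canada[, "e", drop = FALSE]
v.can <- VAR(Canada[, 2:4], exogen = exo, p = 2, type = "both")
## coefficient matrices of the lagged endogenous variables ...
A <- Acoef(v.can)                  # list containing A1 and A2
## ... and the full coefficient matrix (incl. deterministic/exogenous terms)
B <- Bcoef(v.can)
C <- B[, "e", drop = FALSE]        # impact multiplier of the exogenous variable
## long-run multiplier: (I - A1 - A2)^{-1} %*% C
solve(diag(3) - A[[1]] - A[[2]]) %*% C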

Best,
Bernhard

ps: The closer you get to a structural multiple equation model, the more
you might also want to consider the CRAN package systemfit.


>I am fitting a reduced form VAR model using VAR in the vars 
>library. I have
>several endogenous variables, and two exogenous variables. I 
>would like to
>explore the effects of a shock to one of the exogenous 
>variables on one of
>the endogenous variables. Using irf in the vars library only 
>calculates the
>irf for the endogenous variables, this is obviously by design, 
>is there some
>theoretical restriction on why it is not possible to look at 
>the irf's from
>exogenous shocks?  Is there anyway to look at the effects of exogenous
>shocks in R? Do I need to consider some sort of structural model?
>
>the following code sample illustrates what I am trying to do  and the
>problems I am having (I am not an econometrician, but I know 
>that e would be
>better left as an endo variable, I just needed some common 
>data to show what
>I am trying to do)
>
>
>data(Canada)
>attach(Canada)
>v.can<-VAR(Canada[,2:4],exogen=e, p = 2, type = "both")
>
>irf(v.can,impulse= "e", response="prod")
>
>thanks again,
>
>Spencer
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>


Re: [R] statistics - hypothesis testing question

2007-09-14 Thread Pfaff, Bernhard Dr.
Hello Mark,

in addition to, and complementing, the answers already provided to your
question: you may want to consider the J-test, too. For an outline and
the pitfalls of this test, see:

http://citeseer.ist.psu.edu/cache/papers/cs/24954/http:zSzzSzwww.econ.qu
eensu.cazSzfacultyzSzdavidsonzSzbj4-noam.pdf/bootstrap-j-tests-of.pdf
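
One readily available implementation is jtest() in the CRAN package
lmtest; a minimal sketch on simulated data (the data-generating process
below is purely illustrative):

library(lmtest)
set.seed(1)
n   <- 120
x   <- as.numeric(arima.sim(list(ar = 0.5), n + 1))
xs  <- as.numeric(arima.sim(list(ar = 0.3), n + 1))
dat <- data.frame(y = x[-1], xlag = x[-(n + 1)], xslag = xs[-(n + 1)])
## model A: X_t on X_{t-1};  model B: X_t on Xstar_{t-1}
jtest(y ~ xlag, y ~ xslag, data = dat)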


Best,
Bernhard 

>
>I estimate two competing simple regression models, A and B 
>where the LHS
>is the same in both cases but the predictor is different (
>I handle the intercept issue based on other postings I have seen ). I
>estimate the two models on a weekly basis over 24 weeks. 
>So, I end up with 24 RSquaredAs and 24 RsquaredBs, so essentally 2 time
>series of Rsquareds. This doesn't have to be necessarily 
>thought of as a
>time series problem but, is there a usual way, given the Rsquared data,
>to test 
>
>H0 : Rsquared B = Rsquared A versus H1 : Rsquared B > Rsquared A 
>
>so that I can map the 24 R squared numbers into 1 statistic. Maybe
>that's somehow equivalent to just running 2 big regressions over the
>whole 24 weeks and then calculating a statistic from those based on
>those regressions ?
>
>I broke things up into 24 weeks because I was thinking that the
>stability of the performance difference of the two models could be 
>examined over time. Essentially these are simple time series 
>regressions
>X_t = B*X_t-1 + epsilon so I always need to consider
>whether any type of behavior is stable.  But now I am thinking 
>that,  if
>I just want one overall number,  then maybe I should be considering all
>the data simultaneously ? 
>
>In a nutshell,  I am looking for any suggestions on the best 
>way to test
>whether Model B is better than Model A where
>
>Model A :  X_t = Beta*X_t-1 + epsilon
>
>Model B :  X_t = Betastar*Xstar_t-1 + epsilonstar
>
>
>Thanks fo your help.
>
>
>This is not an offer (or solicitation of an offer) to 
>buy/se...{{dropped}}
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>