Farnoosh, I want to start by appreciating your reply. Thank you so much.
I am waiting to hear your plan; why not give me a pic
don’t
feel were fairly covered in the survey options, feel free to reply to me or
leave a comment here on my blog:
http://juliasilge.com/blog/Package-Search/
Thanks,
Julia
Which version of R should I use to make Rcpp work?
Thank you in advance
Julia Edeleva
Hello All,
I’m Julia from Germany and I have a problem concerning the vegan package that I
can’t solve on my own (after hours and hours spent searching for a solution). I
was thrown into the topic of working with R by my professor and wasn’t really
aware that this included working with higher
Which statistical test is most appropriate?
Furthermore, I want to know whether one particular type of error is more
common in one experimental condition than in the other, i.e. test whether
error 1 in condition 1 is more common than error 1 in condition 2.
Thanks a lot
Julia Edeleva
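Since the question is whether one error type is more frequent in one condition than in the other, a two-sample proportion test is one natural candidate; a minimal sketch with made-up counts (all numbers below are hypothetical):

# Hypothetical counts: error 1 occurred 18 times in 60 trials in condition 1,
# and 9 times in 55 trials in condition 2.
errors <- c(18, 9)
trials <- c(60, 55)
prop.test(errors, trials, alternative = "greater")  # H1: condition 1 has the higher rate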
Compare diff
), data = df.rus2, family = binomial)
Thank you in advance.
Sincerely
Julia Edeleva
{
sapply(y, function(y) {
integrate(function(x) exp(- kappa * (y - x)) * theta, a, y)$value
})
}, a, b)$value
)
Result: [1] 0.04535358
Thanks for helping!
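For reference, the complete nested call has the shape below; kappa, theta, a and b are placeholders here, so the numeric value will differ from the result quoted above:

kappa <- 2; theta <- 1.5   # hypothetical parameter values
a <- 0; b <- 1             # hypothetical integration limits
# The outer integrate() passes a vector of y values, so sapply() evaluates
# the inner integral pointwise.
integrate(function(y) {
  sapply(y, function(yy) {
    integrate(function(x) exp(-kappa * (yy - x)) * theta, a, yy)$value
  })
}, a, b)$value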
On Sun, Feb 17, 2013 at 10:04 AM, Berend Hasselman wrote:
>
> On 17-02-2013, at 10:01, julia cafnik wrote:
>
Thanks for your help; I already solved it.
Cheers,
J.
On Sun, Feb 17, 2013 at 9:41 AM, Berend Hasselman wrote:
>
> On 16-02-2013, at 18:01, julia cafnik wrote:
>
> > Dear R-users,
> >
> > I'm wondering how to calculate this double integral in R:
additional numbers after the result.
I know I could round it in this case.
But I am working with a large data set and need to always get the
correct result.
difftime() does not work correctly either.
Does anybody have a suggestion for how to get the correct resu
lt to work with
them. If the option "prefix=FALSE" is used, then files won't be placed in a
subdirectory.
Thanks in advance for your help,
Julia
Unfortunately, I'm not really making any progress, despite a lot of effort.
I've compiled R on Mac OS X for myself using MacPorts and the error is now
"state 28000, code 201", which is failed password authentication.
time I run Sweave on my file for an update of minor
changes to the LaTeX text.
Is there an option in Sweave that avoids recompiling
figures that already exist?
Thanks in advance,
Julia
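One workaround, assuming each figure is written to a known file name, is to guard the expensive plotting code with a file.exists() check so the figure is only regenerated when its file is missing (file name and plot below are hypothetical); dedicated chunk caching, e.g. via the cacheSweave package, may be a cleaner route:

fig_file <- "figures/fig-scatter.pdf"   # hypothetical output path
if (!file.exists(fig_file)) {
  pdf(fig_file, width = 6, height = 4)
  plot(rnorm(100), rnorm(100), main = "Example figure")
  dev.off()
}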
Please point me towards additional diagnostics I can run to
> pin down the problem?
> > I have the latest versions of unixODBC and psqlODBC installed from
> Macports.
> > They seem to be okay, because
> > $isql dsn uid pwd
> > works fine to connect to the database.
> >
d psqlODBC installed from Macports.
They seem to be okay, because
$isql dsn uid pwd
works fine to connect to the database.
Thanks in advance for your help,
Julia
there any alternative way to find an estimate of the covariance matrix using
bootstrapping quantile regression?
Thanks a lot in advance.
All the best,
Julia
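One possible route with quantreg, assuming a standard rq() fit: summary() with se = "boot" and covariance = TRUE should return a bootstrap estimate of the coefficient covariance matrix (data below are simulated only for illustration):

library(quantreg)
set.seed(1)
x <- rnorm(200)
y <- 1 + 2 * x + rnorm(200)
fit <- rq(y ~ x, tau = 0.5)
s <- summary(fit, se = "boot", covariance = TRUE)
s$cov   # bootstrap estimate of the variance-covariance matrix of the coefficients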
k from
"666.1751" into "/mm/dd hh:mm:ss"
And I used this function:
d$Date <- ISOdatetime(2009, 1, 1, 0, 0, 0, tz = "GMT")+d$Date*(24*3600)
where "d" is the FPT output file (previously "loc").
THANKS, Julia
> Are you running
> i
l and also among the coefficients of
different quantile levels.
Thank you in advance!
All the best,
Julia
uot;" to make it a character, it will still be character at
y<-4+b or when I use as.numeric, it will create NA
Thanks in advance,
Julia
but this doesn't:
testf <- function(data, column) { print(data$column) }
Even though the first solution works, I would like to be able to pass the
column name to the function instead of the column number. How do I do that?
Thank you in advance,
Julia
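The usual fix is to index with [[ ]] instead of $, because $ does not substitute the value of a variable; a minimal sketch with a hypothetical data frame:

testf <- function(data, column) {
  print(data[[column]])   # column is a character string naming the column
}
df <- data.frame(a = 1:3, b = 4:6)   # hypothetical example data
testf(df, "b")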
I tried to use:
if (tau + h > 1)
stop("tau + h > 1: error in summary.rq")
But the Hall-Sheather bandwidth is very high because I also vary the number of
observations from 40 to 300.
Is there anyone who could help me?
Thanks in advance!
A
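One way to sidestep the bandwidth check, assuming the error comes from summary.rq(), is to request bootstrap standard errors (or the Bofinger bandwidth via hs = FALSE) instead of the default Hall-Sheather choice; a minimal sketch on simulated data:

library(quantreg)
set.seed(1)
x <- rnorm(40)
y <- 1 + x + rnorm(40)
fit <- rq(y ~ x, tau = 0.9)
summary(fit, se = "boot")   # avoids the sparsity bandwidth entirely
# summary(fit, hs = FALSE)  # or switch to the Bofinger bandwidth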
not work.
Is there any other command that gives me the variance-covariance matrix between
all those coefficients across quantiles?
Thanks a lot!
Julia
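If no single command covers it, one generic fallback is to bootstrap all the quantile fits jointly and take the empirical covariance of the stacked coefficients; a sketch on simulated data (taus, sample size and replications are illustrative):

library(quantreg)
set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
taus <- c(0.25, 0.5, 0.75)
R <- 500
boot_coefs <- t(replicate(R, {
  idx <- sample(n, replace = TRUE)            # resample rows
  unlist(lapply(taus, function(tt)
    coef(rq(y[idx] ~ x[idx], tau = tt))))     # coefficients at every tau
}))
cov(boot_coefs)   # covariance across all coefficients of all quantile levels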
Dear Michael,
Thank you very much for your suggestion.
It indeed worked!
All the best,
Julia
> CC: r-help@r-project.org
> From: michael.weyla...@gmail.com
> Subject: Re: [R] extract cov matrix in summary.rq and use as a matrix.
> Date: Mon, 5 Dec 2011 07:55:45 -0500
c,9 Numeric,9 Numeric,9
Is there any other way to do it?
Thanks a lot in advance!
All the best,
Julia
estimation?
I saw a suggestion related to the function solve.QP, but I did not really
understand that method.
Thanks in advance,
Julia
r(beta(tau|I=2)-beta(tau|I=3)) is equal to NA.
Is there any way to compare those estimators?
Thank you very much!
Best regards,
Julia
among different specifications, e.g.
rcs(x,4) against rcs(x,3), but it does not work.
Thanks for all suggestions!
Julia
> From: dwinsem...@comcast.net
> Date: Sat, 5 Nov 2011 13:42:34 -0400
> To: f.harr...@vanderbilt.edu
> CC: r-help@r-project.org
> Subject: Re: [R] linear against nonl
are two specification tests in this line:
anova.rq and KhmaladzeTest. The first one tests equality and significance of
the slopes across quantiles, and the latter tests whether the linear specification
is a location-shift model or a location-and-scale-shift model.
Do you have any suggestion?
Thanks a lot!
Best regards,
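For the first kind of test, quantreg's anova method can compare the same model estimated at several quantiles; a minimal sketch on simulated data (the Khmaladze-type test is available as KhmaladzeTest in current quantreg versions):

library(quantreg)
set.seed(1)
x <- rnorm(200)
y <- 1 + 2 * x + (1 + 0.5 * x) * rnorm(200)   # heteroscedastic, so slopes differ by tau
f25 <- rq(y ~ x, tau = 0.25)
f50 <- rq(y ~ x, tau = 0.50)
f75 <- rq(y ~ x, tau = 0.75)
anova(f25, f50, f75)   # joint test that the slope is equal across the three quantiles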
When I write:
fitnl <- nlrq(y ~ exp(x), tau=0.5)
I have the following error: Error in match.call(func, call = cll) : invalid
'definition' argument
Is there any way to estimate this model, or should I accept the following
change:
fitnl <- rq(log(y) ~ x, tau=0.5) ?
Than
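For what it is worth, nlrq() expects a formula containing named parameters together with start values; y ~ exp(x) has nothing to estimate, which may be what triggers the match.call error. A sketch under that assumption (parameter names a and b, and the data, are hypothetical):

library(quantreg)
set.seed(1)
d <- data.frame(x = runif(100, 0, 2))
d$y <- exp(0.5 + 1.2 * d$x) * exp(rnorm(100, sd = 0.2))
fitnl <- nlrq(y ~ exp(a + b * x), data = d,
              start = list(a = 0, b = 1), tau = 0.5)
summary(fitnl)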
haven't seen it yet.
Thanks a lot,
Julia
3.3
343
313
323
333
343
353
Thanks a lot! Hope this is the right place to post, if not, please tell me!
best,
Julia
th of the "-" sign
on to LaTeX, like the warning is indicating.
Moreover, the font of the text in the graphics looks different from the
Computer Modern fonts LaTeX is using.
Thanks in advance for your answers,
Julia
ove in order to have this result?
Thanks a lot!
Julia
the software says: "Error in Mhb0[, t] <- M[, (41 + t)]
- M[, (41 - t)] : subscript out of bounds"
What am I doing wrong?
Thanks a lot!
Julia
> Date: Wed, 13 Oct 2010 13:53:28 -0500
> From: er...@ccbr.umn.edu
> To: julia.l...@hotmail.co.uk
> CC: r-help@r-p
-M[,(42-t)]
}
I want to subtract column (41-t) of the matrix M from column (41+t), so that
the matrix Mhb0 shows the result for each t, organized by columns.
Does anybody know what exactly I am doing wrong?
Thank
the column (41-t) of the same matrix M, such that the value of t varies according
to the vector cevenl above.
Why is this looping not working?
Thanks in advance!!!
Julia
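A common cause of that error is indexing the result matrix by the raw t values, which run past its number of columns, or not preallocating it; a minimal sketch of the intended loop in which M, cevenl and the offsets are all hypothetical stand-ins:

set.seed(1)
M <- matrix(rnorm(10 * 82), nrow = 10, ncol = 82)   # needs at least 41 + max(t) columns
cevenl <- 1:40                                      # hypothetical vector of t values
Mhb0 <- matrix(NA_real_, nrow = nrow(M), ncol = length(cevenl))
for (k in seq_along(cevenl)) {
  t <- cevenl[k]
  Mhb0[, k] <- M[, 41 + t] - M[, 41 - t]   # store by position k, not by the value of t
}
dim(Mhb0)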
Thank you all for the explanation!
Best,
Julia
> Date: Thu, 7 Oct 2010 22:37:32 +1100
> Subject: Re: [R] quantile regression
> From: michael.bedw...@gmail.com
> To: martyn.b...@nag.co.uk
> CC: julia.l...@hotmail.co.uk; r-help@r-project.org
>
> Hi Julia,
>
,i] <- coef(qf05)
}
I am quite sure there is a mistake in the code:
qf05 <- rq(formula = mresultb[,i] ~ mresultx[,i], tau=0.5)
because it is just generating the coefficients for one simulation, not for 10
simulations.
best,
Julia
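One way to run the regression once per simulated column and keep every set of coefficients is sketched below, with all data simulated here (names and sizes are hypothetical):

library(quantreg)
set.seed(1)
n <- 100; nsim <- 10
mresultx <- matrix(rnorm(n * nsim), nrow = n)                     # 10 simulated x samples
mresultb <- 1 + 2 * mresultx + matrix(rnorm(n * nsim), nrow = n)  # matching y samples
coefs <- matrix(NA_real_, nrow = 2, ncol = nsim)
for (i in seq_len(nsim)) {
  qf05 <- rq(mresultb[, i] ~ mresultx[, i], tau = 0.5)
  coefs[, i] <- coef(qf05)   # one column of coefficients per simulation
}
coefs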
Date: Thu, 7 Oct 2010 18:51:40 +0800
Subject: Re
simulation, not
for each i.
Maybe this is a stupid question, but I am not so familiar with this software
yet.
Thanks in advance!
Julia
the vector u in
one column. I also tried to increase the number of columns to 10 (or i), but
the matrix will have 10 times the same vector. And what I need is like a Monte
Carlo simulation, where I have to simulate 10 times the variables above.
Am I doing something wrong?
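If the same vector keeps being recycled, one small fix is to draw the random numbers inside replicate(), so every column is a fresh simulation (dimensions below are hypothetical):

set.seed(1)
n <- 50
u <- replicate(10, rnorm(n))   # 50 x 10 matrix, each column an independent draw
dim(u)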
st of the original sentence.
The sentence should look like this: "The p value for my data was 0.2879 which
was not significant."
Thanks in advance.
Julia
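One simple way to splice a computed value into a sentence of that form is sprintf(); the object name pval below is hypothetical:

pval <- 0.2879   # hypothetical p-value computed earlier
sentence <- sprintf("The p value for my data was %.4f which was not significant.", pval)
sentence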
Hello everybody out there using R,
When I try to run the command "R CMD Sweave file.Rnw" (R Version 2.11.1) on the
command line of Windows 7, an error message tells me that the command "sh" is
not known.
I suppose that R is trying to use a shell script, which can't be interpreted
using Windows.
it=TRUE,minModuleSize=1)
I then used the following to visualise the data:
> cut2colour<-labels2colors(cut2)
> plotDendroAndColors(dendro,cut2colour,"Dynamic Tree Cut",
> dendroLabels=FALSE,hang=0.03,addGuide=TRUE,guideHang=0.05)
Any advice or ideas would be much appreci
intervals.
Is this possible to do in R?
Best Regards,
Julia
.
Julia
A$Time=="96h"]/XmodA$IDPG[XmodA$Organ=="Blood" & XmodA$Time=="96h"],
na.rm=TRUE),
var(XmodA$IDPG[XmodA$Organ=="Tumor" &
XmodA$Time=="96h"]/XmodA$IDPG[XmodA$Organ=="Blood" & XmodA$Time=="96h"],
na.rm=TRUE), 3, 1, 1)), . .
Whatever value "host" has, it is simply ignored. Does
anyone have an idea what might be the reason for that?
Thanks in advance,
Julia
Hi Thomas,
Thanks very much for your reply. I used svd and it worked perfectly for my
purposes!
Thanks again,
Julia
I'd really appreciate any advice you could offer... Thanks!
Julia
Hello!
I used the function fracdiff(dn, nar=1, nma=1) and got the values of the d, ar
and ma coefficients.
Other coefficient estimates were also obtained with fdGPH and fdSperio.
How could I get the forecasts in these models?
Thank you very much
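One possible route, assuming the forecast package's support for fracdiff fits suits the need, is sketched below on simulated data; fdGPH and fdSperio only estimate d, so a full ARFIMA fit is still needed before forecasting:

library(fracdiff)
library(forecast)   # assumed here to supply a forecast() method for fracdiff objects
set.seed(1)
dn <- fracdiff.sim(1000, ar = 0.4, ma = -0.3, d = 0.3)$series   # simulated ARFIMA data
fit <- fracdiff(dn, nar = 1, nma = 1)
fc <- forecast(fit, h = 20)   # 20-step-ahead forecasts
fc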
Hello,
I have a question about forecast under model arima(1,1,1).
I construct this model on 1000 observations and find the forecast for
following, for example, 100 observations.
But it's necessary for me to get the predicted values of the previous 100
observations and compare them with the actual values.
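For the in-sample part, one convenient fact about a fitted arima() model is that the one-step-ahead predictions equal the observed series minus the residuals; a self-contained sketch on simulated data:

set.seed(1)
y <- arima.sim(model = list(order = c(1, 1, 1), ar = 0.5, ma = 0.3), n = 1000)
fit <- arima(y, order = c(1, 1, 1))
pred_in_sample <- y - residuals(fit)           # one-step-ahead fitted values
head(cbind(actual = y, predicted = pred_in_sample))
pred_out <- predict(fit, n.ahead = 100)$pred   # forecasts for the next 100 observations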
distr for the first time.
Regards
Julia
Only a man of Worth sees Worth in other men
Windows).
Many many thanks in advance,
- Julia
--
Julia Uitz
Scripps Institution of Oceanography
University of California San Diego
Anderson-Darling test results.
Please guide
Julia
Only a man of Worth sees Worth in other men
pper limit.
My question is
(1) Is there any R package which helps to estimate the parameters of "various"
truncated distributions?
(2) How do we fit truncated distributions to loss data, i.e. how do we
use the KS and AD tests?
Extremely sorry for writing such a long m
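As one illustration of the general recipe, the sketch below fits a left-truncated lognormal to simulated loss data by maximizing the truncated likelihood with base R only; the distribution, threshold and starting values are assumptions, and the renormalized CDF is what a KS-type comparison would use:

set.seed(1)
L <- 100                                   # known truncation (reporting) threshold
x <- rlnorm(5000, meanlog = 5, sdlog = 1)
x <- x[x > L]                              # observed, left-truncated losses
negll <- function(par) {                   # negative truncated log-likelihood
  mu <- par[1]; sigma <- exp(par[2])
  -sum(dlnorm(x, mu, sigma, log = TRUE) -
         plnorm(L, mu, sigma, lower.tail = FALSE, log.p = TRUE))
}
fit <- optim(c(log(mean(x)), 0), negll)
mu <- fit$par[1]; sigma <- exp(fit$par[2])
c(meanlog = mu, sdlog = sigma)
# Truncated CDF for goodness-of-fit comparisons (note: parameters are estimated,
# so the KS p-value is only approximate):
ptrunc_lnorm <- function(q)
  (plnorm(q, mu, sigma) - plnorm(L, mu, sigma)) /
    plnorm(L, mu, sigma, lower.tail = FALSE)
ks.test(x, ptrunc_lnorm)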
out
whether my fit is correct or not.
Thanking you in advance
Regards
Julia Cains, Brisbane
**
Only a man of Worth sees Worth in other men
**
Hello!
I have a dataset with the dates and positions - Lat and Long columns
How do I make a scatter plot?
Do I have to cbind to combine the Lat and Long columns?
Any suggestions will be much appreciated!
Thank you
Julia
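A minimal sketch, assuming the data sit in a data frame with Long and Lat columns (the values below are made up); no cbind() is needed:

d <- data.frame(Long = runif(20, -10, 10), Lat = runif(20, 40, 60))  # hypothetical positions
plot(Lat ~ Long, data = d, pch = 19,
     xlab = "Longitude", ylab = "Latitude", main = "Positions")
# equivalently: plot(d$Long, d$Lat)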
Hi, Sorry, it's ok I've figured it out using the as.matrix function!
Cheers,
Julia.
From: Scot W. McNary [smcn...@charm.net]
Sent: 23 December 2009 15:41
To: Julia Myatt
Cc: r-help@r-project.org
Subject: Re: [R] Cohen's kappa, unequ
the data into
R directly?
Sorry, I really am a beginner!
Thanks again,
Julia.
From: Scot W. McNary [smcn...@charm.net]
Sent: 23 December 2009 15:41
To: Julia Myatt
Cc: r-help@r-project.org
Subject: Re: [R] Cohen's kappa, unequal score ranges
Juli
as to what I am doing wrong? I'm afraid I'm very new to R so don't even
know the basics!!
Thanks for all your help,
Julia.
From: Jim Lemon [...@bitwrit.com.au]
Sent: 23 December 2009 02:15
To: Julia Myatt
Cc: r-help@r-project.org
Subject: R
not sure what this means or what I should do!!
Any help in this area would be much appreciated, or anything about the best way
to deal with inter-observer reliability (my data is all categorical),
Thanks,
Julia.
I'm not sure if
I'm trying to do the right thing, or if it's my data it doesn't like! Any
help with this problem would be much appreciated, just a point in the right
direction, or another study that has had to deal with this kind of data!
Thanks,
Julia.
.
Regards
Julia
Only a man of Worth sees Worth in other men
However, when I try to apply the code given by you, it shows the error
"plot.new has not been called yet".
Sir, does the command work only with the plot command and not with barchart?
Warm regards
Julia
***
readability.
Thanking in advance
Julia
Only a man of Worth sees Worth in other men
s and regards
Julia
Only a man of Worth sees Worth in other men
Dear Uwe Ligges,
Sir, I am extremely sorry for not mentioning about the windows environment. I
am new to R and trying to learn R besides having to attend my regular
commitments. Thanks a lot for your guidance again Sir.
Regards
Julia
2.10, I don't wish to reinstall packages like lmom, lmomco, quantreg,
YieldCurve etc.
Please guide
Julia
Only a man of Worth sees Worth in other men
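One common approach after upgrading, assuming the old library folder is still on disk (the path below is hypothetical and Windows-style), is to point the new R at it and rebuild only what is out of date:

.libPaths("C:/Users/julia/R/old-library")        # make the existing packages visible
update.packages(checkBuilt = TRUE, ask = FALSE)  # reinstall only what was built for the old R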
trying to learn it as early as possible.
Extremely sorry for this stupid question.
Please guide
Julia
Only a man of Worth sees Worth in other men
the subset of maturity e.g.
maturity_sub <- c(30, 60, 90)
Thanking you all in advance
Regards
Julia
Only a man of Worth sees Worth in other men
having a polynomial in t.
I am not that good at statistics or mathematics.
I request you to kindly help me with how to express y as a polynomial
in terms of t.
Thanking you in advance
Julia
Only a man of Worth sees Wo
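A minimal sketch of expressing y as a polynomial in t by least squares; the degree (3) and the data are assumptions:

set.seed(1)
t <- 1:50
y <- 2 + 0.5 * t - 0.01 * t^2 + rnorm(50)   # hypothetical series
fit <- lm(y ~ poly(t, 3, raw = TRUE))       # y = b0 + b1*t + b2*t^2 + b3*t^3
coef(fit)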
I am Julia Cains from Brisbane. This is my
first mail to this group and I have recently started learning the R language.
I am trying to learn the smoothing
of the yield curve. However, I came across the CRAN package "YieldCurve",
meant for modelling and estimation of the yield
Achim Zeileis wrote:
Julia:
I'm trying now to apply the package strucchange to see whether there
is a structural change in linear regression. I have noted the
following problem that arises in my case with recursive-based CUSUM:
generic function recresid() in efp() generates an error,
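For reference, a minimal recursive CUSUM run that works on simulated data is sketched below; it does not reproduce the recresid() error, but it shows the intended shape of the call:

library(strucchange)
set.seed(1)
x <- rnorm(100)
y <- c(1 + 2 * x[1:50], 3 + 2 * x[51:100]) + rnorm(100)   # break in the intercept at t = 50
cus <- efp(y ~ x, type = "Rec-CUSUM")
plot(cus)     # recursive CUSUM path with boundaries
sctest(cus)   # significance test for a structural change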
lised inverse, ginv())?
Thank you in advance for your help!
Julia
Hi,
I would like to operate on certain columns in a dataframe, but not
others. My data looks like this:
x1 x2 x3
1 2 3
4 5 6
7 8 9
I want to create a new column named x4 that is the sum of x1 and x2,
but NOT x3. I looked at colSums and apply, but those functions seem to
use all the c
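A minimal sketch with the example data above: either add the two columns directly, or use rowSums() on just the columns you want (colSums() works column-wise, which is why it did not fit here):

df <- data.frame(x1 = c(1, 4, 7), x2 = c(2, 5, 8), x3 = c(3, 6, 9))
df$x4 <- df$x1 + df$x2                  # direct and clear for two columns
df$x4 <- rowSums(df[, c("x1", "x2")])   # same result, scales to many columns
df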
My data files have an equal number of rows or spatial locations. Any
suggestions would be useful.
Also, is there a way to add multiple covariates into the trend.spatial
function?
Thanks,
*****
Julia L. Angstmann
Department of Botany
University of Wyoming
1000 E
Hello,
My name is Julia and I'm doing my PhD on ROC analysis.
I'm trying to write a maximization function for the likelihood attached in
the document.
For some reason it's not working; I keep getting this error:
Error: unexpected symbol in:
"+log(v_pred))
return"
Hi Peter,
thanks a lot for your help. Very much appreciated.
Cheers,
Julia
Peter Dalgaard wrote:
>
> Julia S. wrote:
>> Hi there,
>>
>> thanks for your help. I did read Bates statement several times, and I am
>> very glad and thankful that many statisticians s
like me before and may have a convincing line
for a referee at hand. I have problems reformulating what I read here in my
own words.
Dieter: when you write:
"but to use lme instead when possible" do you mean that when using lme the
F-stats are correct? Because I assumed that the problem woul
ask for an explanation when somebody
doesn't understand something? I've learned that asking is a good way of
learning new things. Sorry if that offended you.
Confused,
Julia
Cheers,
Julia
Dear R-users,
I did do a thorough search and read many articles and forum threads on the
lme and lmer methods and their pitfalls and problems. I, being not a good
statistician but a mere "user", came to the conclusion that the most correct
form of reporting statistics for a mixed linear model wou
alue Seg
111 5 2
111 6 2
111 2 2
178 7 4
178 3 4
138 3 1
138 8 1
138 7 1
138 6 1
How to do this? Thank you so much for the help.
Sincerely
Julia
--- On Thu, 9/11/08, Adaikalavan Ramasamy <[EMAIL PROTECTED]> wrote:
y data[, 12], hang=0,cex=0.7,
main="Euclidean/Ward",
ylab="Distance", xlab="AIN", sub =" ")
can anyone help??
regards,
julia
Hey there!
I am searching for an attribute evaluation algorithm (such as one based on
information gain, gain ratio, or chi-squared statistics).
Is something like that available in R?
Thanks for your reply.
Best regards,
Julia
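One option worth checking (an assumption about fit, not something named in the thread) is the FSelector package, which exposes information-gain, gain-ratio and chi-squared attribute rankings:

library(FSelector)
data(iris)
information.gain(Species ~ ., iris)   # information-gain score per attribute
gain.ratio(Species ~ ., iris)
chi.squared(Species ~ ., iris)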
Thanks for your help.
I checked the caret package out and the tuning works, but I can't find a way to
make a contingency table in order to see the classification result,
e.g. like:
table(outcome NaiveBayes, mydata$code)
Is there something like that?
Julia
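A minimal sketch of the contingency-table step with e1071's naiveBayes on a built-in data set; the real model object and the code column would take the place of these:

library(e1071)
data(iris)
model <- naiveBayes(Species ~ ., data = iris)
pred  <- predict(model, iris)                    # predicted classes
table(predicted = pred, actual = iris$Species)   # contingency table of the results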
tune.control)
Thanks for your help!
Cheers, Julia
columns than rows) and therefore get
an error message.
Any ideas what to do?
Thanks for your help,
I really appreciate it!
Julia
Original message
> Date: Fri, 12 Oct 2007 23:38:01 +0300
> From: "Kenn Konstabel" <[EMAIL PROTECTED]>
> To: "Julia Kr
Hello!
Is there a package in R that does Q-type factor analysis?
I know how to do principal component analysis, but haven't found any
application of Q-type factor analysis.
Thanks,
Julia
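In the absence of a dedicated package, the Q-type idea can be sketched directly: analyse the correlations between cases (rows) rather than between variables. Below this is done with eigen(); the same case-correlation matrix could also be passed to factanal(covmat = ..., n.obs = ...) for a proper factor model. All data here are made up:

set.seed(1)
n_vars <- 40
prof_a <- rnorm(n_vars); prof_b <- rnorm(n_vars)   # two underlying case profiles
X <- rbind(t(replicate(4, prof_a + rnorm(n_vars, sd = 0.5))),
           t(replicate(4, prof_b + rnorm(n_vars, sd = 0.5))))   # 8 cases x 40 variables
R_cases <- cor(t(X))           # correlations among cases, not variables
eig <- eigen(R_cases)
round(eig$vectors[, 1:2], 2)   # Q-mode "loadings" of the 8 cases on 2 components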
Hey there!
I would like to assess the stability of the clustering of a subset of my data by
comparing it to the clustering of another subset. Does there exist a
quantitative similarity measure that can be applied?
I am open to any suggestions.
Thanks for your help,
Julia
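One concrete option, assuming both clusterings label the same objects, is the (adjusted) Rand index, e.g. via e1071::classAgreement; a sketch on simulated data:

library(e1071)
set.seed(1)
X  <- matrix(rnorm(100 * 2), ncol = 2)   # hypothetical data
c1 <- kmeans(X, centers = 3)$cluster
c2 <- kmeans(X, centers = 3)$cluster     # e.g. a repeated run or another subset's labels
classAgreement(table(c1, c2))$crand      # adjusted Rand index: 1 = identical partitions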
puted only on the patterns of subi.
12: end for
13: end for
I would be glad of any help; I don't really know what to do!
Thanks and regards,
Julia
outcomes can be different)
But is there a way to stabilize the clustering (meaning finding the clustering
that appears most often in 10 trials)?
Thank you for any ideas,
Julia
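If the instability comes from random initialization, one standard remedy is to run many starts and keep the best solution; a base-R sketch in which kmeans is assumed as the clustering method:

set.seed(1)
X  <- matrix(rnorm(100 * 2), ncol = 2)
cl <- kmeans(X, centers = 3, nstart = 25)   # best (lowest within-SS) of 25 random starts
table(cl$cluster)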
Hello!
I would need code for 10-fold cross-validation for the Naive Bayes and svm
classifiers in the e1071 package. Has something like that already been done?
I tried to do it myself by applying the tune function first:
library(e1071)
tune.control <- tune.control(random =F, nrepeat=1,
repea
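A sketch of 10-fold cross-validation through e1071's tune() on a built-in data set; the parameter grid is only illustrative, and for naiveBayes (which has little to tune) a plain loop over folds computing held-out predictions works just as well:

library(e1071)
data(iris)
ctrl <- tune.control(sampling = "cross", cross = 10)   # 10-fold cross-validation
svm_cv <- tune(svm, Species ~ ., data = iris,
               ranges = list(gamma = c(0.1, 1), cost = c(1, 10)),
               tunecontrol = ctrl)
summary(svm_cv)   # cross-validated error for each parameter combination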