Hello, Derek,
first of all, be very aware of what David Winsemius said; you are about to
enter the area of "unprincipled data-mining" (as he called it) with its
trap -- one of many -- of multiple testing. So, *if* you know what the
consequences and possible remedies are, a purely R-syntactic "
On Thu, 5 May 2011 18:42:11 -0700 (PDT),
Frank Harrell wrote:
> Hi Marco,
>
> You're welcome.
>
> The number at risk at given time points is a fairly standard thing to
> add to survival plots.
I know, but last year, as a "newbie" in biostatistics, I felt the need
to read the rms book e
poLCA is an option, as are randomLCA, flexmix, and depmixS4, and there are
likely to be more.
What specific models are you interested in?
Best, Ingmar
On Fri, May 6, 2011 at 6:13 AM, Wincent wrote:
> I guess LEM is a software for latent class analysis. If so, you may
> want to have a look at poLCA
Dear Dr ;
I am a PhD student in the Epidemiology department of the National University of
Singapore. I used the R command rcspline.plot for plotting a restricted cubic
spline; the model is based on Cox. I managed to get a plot without
adjustment for other covariates, but I have a p
Hi,
I'm hoping someone can offer some advice: I have a matrix "x" of dimensions 160
by 1. I need to create a matrix "y", where the first 7 elements are equal
to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^1/7 and so on
all the way to the 1040th element. I have implemente
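A minimal sketch of one way to build such a vector, assuming the block
lengths simply alternate 7, 6, 7, 6, ... (80 pairs of 7 and 6 cover all
160 elements and give 7*80 + 6*80 = 1040 values); note that in R,
x[1]^1/7 parses as (x[1]^1)/7, so the exponent needs parentheses:

```r
# stand-in for the real 160 x 1 matrix
x <- matrix(runif(160), ncol = 1)
len <- rep(c(7, 6), times = 80)     # block length for each element of x
# raise each element to 1/len, then repeat it len times
y <- matrix(rep(x^(1/len), times = len), ncol = 1)
dim(y)                              # 1040 x 1
```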
Hi,
I use the datadist function in the rms library in order to draw my nomogram.
After reading, I tried this code:
f<-lrm(Y~L+P,data=donnee)
f <- lrm(Y~L+P,data=donnee)
d <- datadist(f,data=donnee)
options(datadist="d")
f <- lrm(Y~L+P)
summary(f,L=c(0,506,10),P=c(45,646,10))
plot(Predict(
> which is the maximum large of digits that R has?, because SQL work
> with 50 digits I think. and I need a software that work with a lot
> of digits.
The .Machine() command will provide some insight into these matters.
cu
Philipp
--
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bi
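For what it's worth, .Machine is a built-in list (not a function), and a
quick look at it shows the limits of base R's doubles:

```r
# .Machine describes the platform's double-precision arithmetic
.Machine$double.eps      # smallest x such that 1 + x != 1; about 2.22e-16
.Machine$double.digits   # 53 mantissa bits, roughly 15-16 decimal digits
.Machine$integer.max     # 2147483647
```

So base R carries nowhere near 50 significant decimal digits; arbitrary
precision needs an add-on package such as gmp or Rmpfr.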
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
>
> Hi,
> I'm hoping someone can offer some advice:I have a matrix "x" of dimensions
> 160 by 1. I need to create a matrix "y", where the first 7 elements are
> equal to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^
Hello Lee,
in addition to David's answer, see: ?MacKinnonPValues in package 'urca' (CRAN
and R-Forge).
Best,
Bernhard
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of David Winsemius
> Sent: Friday, 6 May 2011
Hi Danielle.
You appear to have two problems:
1) getting the data into R
Because I don't have the file at hand, I'm going to simulate reading it
through a text connection
orgdata<-textConnection("Graph ID | Vertex1 | Vertex2 | weight\n1 | Alice |
Bob | 2\n1 | Alice | Chris | 1\n1 | Alice | Jane |
Hi,
sorry for the late response and many thanks. A combination of get() and
paste() did the job.
Regards
On Thu, Apr 28, 2011 at 5:06 PM, Petr PIKAL wrote:
> Hi
>
> r-help-boun...@r-project.org wrote on 28.04.2011 16:16:16:
>
> > ivan
> > Sent by: r-help-boun...@r-project.org
> >
> > 28.04.
On 05/05/2011 09:50 PM, matibie wrote:
I'm trying to add the exact value on top of each column of a histogram; I
have been trying with the text function but it doesn't work.
The problem is that the program itself decides the exact value to give to
each column, and there is not, like in a bar-plot
On 05/05/2011 10:48 PM, pcc wrote:
This is probably a very simple question but I am completely stumped! I am
trying to do a shapiro.wilk(x) test on a relatively small dataset (75), and
each time my variable keeps coming out as 'NULL', and
shapiro.test(fcv)
Error in complete.cases(x) : no input
Hi,
I did a similar experiment with my data. Maybe the following code will give
you some idea. It might not be the best solution, but for me it worked.
Please do share if you get another idea.
Thank you
CODE###
library(dismo)
set.seed(111)
dd<-read.delim("yourfile.csv",sep=",",header=T)
Hello,
Thank you for your reply but I'm not sure your code answers my needs,
from what I read it creates a 10-fold partition and then extracts the
kth partition for future processing.
My question was rather: once I have a 10-fold partition of my data,
how to supply it to the "train" function of t
An alternative approach:
library(fdth)
fd <- fdt(rnorm(1e3, m=10, sd=2))
plot(fd)
breaks <- with(fd, seq(breaks["start"], breaks["end"], breaks["h"]))
mids <- 0.5 * (breaks[-1] + breaks[-length(breaks)])
y <- fd$table[, 2]
text(x=mids, y=y,
lab=y,
pos=3)
HTH,
JCFaria
Thanks a lot
I owe you all 10 points of my grade!!
--
View this message in context:
http://r.789695.n4.nabble.com/Insert-values-to-histogram-tp3498140p3502017.html
Sent from the R help mailing list archive at Nabble.com.
Dear R-users,
I am trying to run sensitivity and uncertainty analysis with R using the
following functions :
- samplingSimple from the package SMURFER
- morris from the package sensitivity
I have a different problem for each of these two functions:
- the functi
G'day Rolf,
On Fri, 06 May 2011 09:58:50 +1200
Rolf Turner wrote:
> but it's strange that the dodgy code throws an error with gam(dat1$y
> ~ s(dat1$x)) but not with gam(dat2$cf ~ s(dat2$s))
> Something a bit subtle is going on; it would be nice to be able to
> understand it.
Well,
R> trac
Dear R-help,
I am trying to reproduce some results presented in a paper by Anderson
and Blundell in 1982 in Econometrica using R.
The estimation I want to reproduce concerns maximum likelihood
estimation of a singular equation system.
I can estimate the static model successfully in Stata but for t
I think those functions are now defunct (were only available in previous
versions).
S
On Thursday, May 5, 2011 at 6:33 PM, Andrew Robinson wrote:
> Hi Arnau,
>
> please send the output of sessionInfo() and the exact commands and
> response that you used to install and load apTreeshape.
>
> Ch
Please post the entire script next time, e.g., include require(rms). You
have one line duplicated. Put this before the first use of lrm: d <-
datadist(donnee); options(datadist='d')
Frank
Komine wrote:
>
> Hi,
> I use datadist fonction in rms library in order to draw my nomogram.
> After rea
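Putting Frank's corrections together, a sketch of the full script (donnee
here is a simulated stand-in for the poster's data frame):

```r
library(rms)    # Frank: include require(rms)/library(rms) in the script
## simulated stand-in for the poster's `donnee`
set.seed(1)
donnee <- data.frame(L = runif(200, 0, 646), P = runif(200, 45, 646))
donnee$Y <- rbinom(200, 1, plogis(-2 + 0.005 * donnee$L))
d <- datadist(donnee)    # datadist on the data frame, before the first lrm call
options(datadist = "d")
f <- lrm(Y ~ L + P, data = donnee)   # the duplicated lrm() line is dropped
summary(f)
plot(Predict(f))
```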
Please follow the posting guide. You didn't state which package you are
using and didn't include a trivial self-reproducing example that causes the
error.
For your purpose the rms package is going to plot restricted cubic spline
fits (and shaded confidence bands) more flexibly.
Frank
Haleh G
Hi,
I have tried to use uniroot to solve for a value (value a in my function) that
gives f=0, and I repeat this process for 1 times (simulations). However, an
error occurs from the 4625th simulation - Error in uniroot(f, c(0, 2),
maxiter = 1000, tol = 0.001) :
f() values at end points not of oppo
Taking the final value for each 30-minute interval seems like it would get what
I want. The problem is that sometimes this value would be 15 minutes before the
end of the 30-minute interval. What would I use to pick up this value?
-Original Message-
From: ehl...@ucalgary.ca [mailto:ehl..
On May 6, 2011, at 4:03 AM, Philipp Pagel wrote:
which is the maximum large of digits that R has?, because SQL work
with 50 digits I think.
I am wondering if that is binary or decimal.
and I need a software that work with a lot
of digits.
The .Machine() command will provide some insight
You should ask your instructor or teaching assistant for help. R-help
is not for doing homework.
Duncan Murdoch
On 06/05/2011 9:00 AM, CarJabo wrote:
Hi,
I have tried to use uniroot to solve a value (value a in my function) that
gives f=0, and I repeat this process for 1 times(stimulatio
On Fri, Apr 29, 2011 at 4:27 PM, mathijsdevaan wrote:
> Hi list,
>
> Can anyone tell my why the following does not work? Thanks a lot! Your help
> is very much appreciated.
>
> DF = data.frame(read.table(textConnection(" B C D E F G
> 8025 1995 0 4 1 2
> 8025 1997 1 1 3 4
> 8026
On May 6, 2011, at 4:24 AM, Petr Savicky wrote:
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
Hi,
I'm hoping someone can offer some advice:I have a matrix "x" of
dimensions 160 by 1. I need to create a matrix "y", where the
first 7 elements are equal to x[1]^1/7, then t
On Fri, May 06, 2011 at 09:17:11AM -0400, David Winsemius wrote:
>
> On May 6, 2011, at 4:03 AM, Philipp Pagel wrote:
> >The .Machine() command will provide some insight into these matters.
>
> On my device (and I suspect on all versions of R) .Machine is a
> built-in list and there is no .Machin
Sorry, I am not asking someone to do my homework; I have finished the whole
procedure. I am just wondering why this technical error occurs, so I can fix
it myself.
By the way, I don't have any instructor or teaching assistant to ask, so any
suggestion about the error will be appreciated.
Thanks very
Thank you very much for the reply. I tend to agree with your first
suggestion. And that's exactly what I did.
In other functions, an easier way to marginalize such a variable C (not
necessarily a factor) is to use the option
include=c("A","B","A:B")
This essentially sets C at a value such that
Dear R Community,
I am currently facing a seemingly obscure problem with Panel
Corrected Standard Errors (PCSE) following Beck & Katz (1995). As the
authors suggest, I regressed a linear model (tmodel) with lm() with
option "na.action=na.exclude" (I have also tried other options here). My
d
Hello
I'm interested in long-term prediction of time series: for this, I and
other guys have developed STRATEGICO, a free and open-source tool at
http://code.google.com/p/strategico/
Please have a look at it, test it online with your own time series, and give
us any feedback and suggestio
On Fri, May 6, 2011 at 6:33 AM, CarJabo wrote:
> sorry I am not asking someone to do my homework, as I have finished all the
> procedure. I am just wondering why this technical error occurs, so I can fix
> it myself.
My guess would be it has something to do with the random data
generated at the 4
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
>
> Hi,
> I'm hoping someone can offer some advice:I have a matrix "x" of dimensions
> 160 by 1. I need to create a matrix "y", where the first 7 elements are
> equal to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^
On 05.05.2011 21:20, Ray Brownrigg wrote:
On 6/05/2011 6:06 a.m., swaraj basu wrote:
Dear All,
I am trying to build a package for a set of functions. I am
able to build the package and it's working fine. When I check it with
R CMD check
I get the following warning: no visible global function
de
Hi all! I'm getting a model fit from glm() (a binary logistic regression
fit, but I don't think that's important) for a formula that contains powers of
the explanatory variable up to fourth. So the fit looks something like this
(typing into mail; the actual fit code is complicated because it
Hi Ben,
From what you have written, I am not exactly sure what your
seat-of-the-pants sense is coming from. My pantseat typically does not
tell me much; however, quartic trends tend to be less stable than linear,
so I am not terribly surprised.
As two side notes:
x_qt <- x^4 # shorter code-wise
an
The strsplit function is probably the closest R function to perls split
function. For more detailed control the gsubfn package can be useful.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gamliel Beyderman
Sent: Thursday, May 05
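For example, a few strsplit() calls; like Perl's split, the pattern is a
regular expression unless you say otherwise:

```r
strsplit("a,b,,c", ",")[[1]]               # "a" "b" ""  "c"
strsplit("one1two22three", "[0-9]+")[[1]]  # regex pattern: "one" "two" "three"
strsplit("a.b.c", ".", fixed = TRUE)[[1]]  # fixed = TRUE disables the regex
```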
Will all the keywords always be present in the same order? Or are you looking
for the keywords, but some may be absent or in different orders?
Look into the gsubfn package for some tools that could help.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-proj
On May 6, 2011, at 11:35 AM, Ben Haller wrote:
Hi all! I'm getting a model fit from glm() (a binary logistic
regression fit, but I don't think that's important) for a formula
that contains powers of the explanatory variable up to fourth. So
the fit looks something like this (typing into
Hello all,
I'm trying to create a heatmap using 2 matrices I have: z and v. Both
matrices represent different correlations for the same independent
variables. The problem I have is that I wish to have the values from matrix
z to be represented by color intensity while having the values from matri
> From what you have written, I am not exactly sure what your
> seat-of-the-pant sense is coming from. My pantseat typically does not
> tell me much; however, quartic trends tend to be less stable than linear,
> so I am not terribly surprised.
My pantseat is not normally very informative either, b
Gabor Grothendieck wrote:
>
> On Tue, Dec 7, 2010 at 11:30 AM, Pete Pete
> wrote:
>>
>> Hi,
>> consider the following two dataframes:
>> x1=c("232","3454","3455","342","13")
>> x2=c("1","1","1","0","0")
>> data1=data.frame(x1,x2)
>>
>> y1=c("232","232","3454","3454","3455","3
Hello,
I'm running version R x64 v2.12.2 on a 64bit windows 7 PC. I have two data
vectors, x and y, and try to run archmCopulaFit. Most of the copulas produce
errors. Can you tell me what the errors mean and if possible, how I can set
archmCopulaFit options to make them run? I see in the do
I'm trying to create an xyplot with a "groups" argument where the y-variable
is the cumsum of the values stored in the input data frame. I almost have
it, but I can't get it to automatically adjust the y-axis scale. How do I
get the y-axis to automatically scale as it would have if the cumsum value
Hmmm
After reading that email four times, I think I see what you mean.
Checking for variables within particular scopes is probably one of the most
challenging things in R, and I would guess in other languages too. In R
it's compounded by situations when you're writing a function to accept
va
On May 6, 2011, at 12:31 PM, David Winsemius wrote:
> On May 6, 2011, at 11:35 AM, Ben Haller wrote:
>
>> Hi all! I'm getting a model fit from glm() (a binary logistic regression
>> fit, but I don't think that's important) for a formula that contains powers
>> of the explanatory variable up to
On Fri, 2011-05-06 at 11:20 -0500, Gene Leynes wrote:
> Hmmm
>
> After reading that email four times, I think I see what you mean.
>
> Checking for variables within particular scopes is probably one of the most
> challenging things in R, and I would guess in other languages too. In R
> it's
Here is an example of what I would like to do:
meas = measurements
times = time of measurement
measf = measurements in final, reduced matrix
timesf = time of measurement in final matrix
meas<-runif(30)
times<-sort(runif(30))
inputmat<-cbind(times,meas)
colnames(inputmat)<-c("timef","measf")
I would
FWIW:
Fitting higher order polynomials (say > 2) is almost always a bad idea.
See e.g. the Hastie, Tibshirani, et al. book on "Statistical
Learning" for a detailed explanation why. The Wikipedia entry on
"smoothing splines" also contains a brief explanation, I believe.
Your ~0 P values for the
Thanks to all who replied to my post. The best solution, which answers
my question entirely and can be used as a general function rather than
case by case, is the one sent by the package author.
Many thanks to everybody. It was helpful.
Cristina
On 05/05/2011 10:44, Deepayan Sarkar wrote:
On Wed,
On May 6, 2011, at 11:35 AM, Pete Pete wrote:
Gabor Grothendieck wrote:
On Tue, Dec 7, 2010 at 11:30 AM, Pete Pete
wrote:
Hi,
consider the following two dataframes:
x1=c("232","3454","3455","342","13")
x2=c("1","1","1","0","0")
data1=data.frame(x1,x2)
y1=c("232","232"
On Fri, 6 May 2011, Bert Gunter wrote:
FWIW:
Fitting higher order polynomials (say > 2) is almost always a bad idea.
See e.g. the Hastie, Tibshirani, et al. book on "Statistical
Learning" for a detailed explanation why. The Wikipedia entry on
"smoothing splines" also contains a brief explanat
The following code works mostly. It runs fine but...
1. Is there a way to increment the xlab for each graph? I would like to have
Graph 1, Graph 2, etc. Right now it just gives me Graph i over and over
again.
2. Is there a way to get the x-axis and y-axis to be bold or at least a
darker color?
Hello
I am a new user of R,
and I have a problem with R and netcdf.
The installation succeeded, and I could run all the examples.
But when I use my own netcdf file it is different.
I want to do statistics on this kind of file.
1)
first calculate mean .
my data is like that
through ncdump -h test.nc
netcdf test {
di
This should work!!
for(i in 1:12){
xLabel <- paste("Graph", i)
plotTitle <- paste0("Graph ", i, ".jpg")
jpeg(plotTitle)
hist(zNort1[,i], freq=FALSE, xlab=xLabel, col="blue",
main="Standardized Residuals Histogram", ylim=c(0,1), xlim=c(-3.0,3.0),
axes = FALSE)
axis(1, col = "blue",col.axis = "bl
1. ?paste ?sprintf
2. ?par (look at col.axis) ?axis
3. ?pdf ?png ?dev.copy
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org]
On May 6, 2011, at 1:11 PM, wwreith wrote:
The following code works mostly. It runs fine but...
1. Is there a way to increment the xlab for each graph? I would like
to have
Graph 1, Graph 2, etc. Right now it just gives me Graph i over and
over
again.
Use the power of bquote. See modif
Hello All,
Let's say I have data spanning all quadrants of x-y plane. If I plot data
with a certain x and y range using xlim and ylim or by using plot.formula as
described in this link:
http://www.mathkb.com/Uwe/Forum.aspx/statistics/5684/plotting-in-R
DF <- data.frame(x = rnorm(1000), y = rnorm(
Don't attach the Design package. Use only rms. Please provide the output of
lrm (print the f object). With such a strong model make sure you do not
have a circularity somewhere. With nomogram you can specify ranges for the
predictors; default is 10th smallest to 10th largest.
rms will not make
On May 6, 2011, at 2:28 PM, Pavan G wrote:
Hello All,
Let's say I have data spanning all quadrants of x-y plane. If I plot
data
with a certain x and y range using xlim and ylim or by using
plot.formula as
described in this link:
http://www.mathkb.com/Uwe/Forum.aspx/statistics/5684/plotting
Hi again everybody
I have a new problem concerning the R data editor. It is possible to add a
new variable column, but they all get the name "var1". I read somewhere that
it should be possible to change the variable name by clicking on it, but
that doesn't work. Is that a bug, or how is it possible
Hi all R users,
Thanks Frank for your advice.
In fact I posted all my script. In the R help, the script for nomogram is
long, and I took only the part that I think is interesting in my case.
I used information from datadist {Design} and rms {rms} in the R help
to write my code.
I see tha
In Matlab, an array can be created from 1 - 30 using the command similar to R
which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
-
In theory, practice and theory are the same. In practice, they are not - Albert
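For the record, the seq() equivalent of Matlab's 1:0.1:30:

```r
s <- seq(1, 30, by = 0.1)       # 1.0, 1.1, ..., 29.9, 30.0
length(s)                       # 291 values
seq(1, 30, length.out = 291)    # same thing, fixing the count instead of the step
```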
I can get around it by doing something like:
as.matrix(rep(1,291))*row(as.matrix(rep(1,291)))/10+.9
I was just hoping for a simple command.
Schatzi wrote:
>
> In Matlab, an array can be created from 1 - 30 using the command similar
> to R which is 1:30. Then, to make the array step by 0.1 the c
On Fri, May 06, 2011 at 12:11:30PM -0700, Schatzi wrote:
> In Matlab, an array can be created from 1 - 30 using the command similar to R
> which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
> which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
> ...
This may well be a
?seq
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Schatzi
> Sent: Friday, May 06, 2011 1:12 PM
> To: r-hel
On Fri, May 6, 2011 at 12:11 PM, Schatzi wrote:
> In Matlab, an array can be created from 1 - 30 using the command similar to R
> which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
> which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
Hmm, in this case, I would do it
Beautiful.
-Original Message-
From: greg.s...@imail.org [mailto:greg.s...@imail.org]
Sent: Friday, May 06, 2011 02:17 PM
To: Thompson, Adele - adele_thomp...@cargill.com; r-help@r-project.org
Subject: RE: [R] create arrays
?seq
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
In
Hi Matthias,
What do you mean by "that doesn't work"? What platform are you using? Using:
> sessionInfo()
R version 2.13.0 (2011-04-13)
Platform: x86_64-pc-mingw32/x64 (64-bit)
fix(mydataframe)
brings up the data editor, then if I click a variable name, and change
it and shut down the data ed
Hello!
I'd like to take Dates and extract from them months and years - but so
that it sorts correctly. For example:
x1<-seq(as.Date("2009-01-01"), length = 14, by = "month")
(x1)
order(x1) # produces correct order based on full dates
# Of course, I could do "format" - but this way I am losing t
On Fri, May 6, 2011 at 4:07 PM, Dimitri Liakhovitski
wrote:
> Hello!
>
> I'd like to take Dates and extract from them months and years - but so
> that it sorts correctly. For example:
>
> x1<-seq(as.Date("2009-01-01"), length = 14, by = "month")
> (x1)
> order(x1) # produces correct order based o
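One approach that sorts correctly is a zero-padded year-month string via
format(); zoo's as.yearmon (my suggestion, not necessarily what Gabor
proposed) keeps a date-like class instead:

```r
x1 <- seq(as.Date("2009-01-01"), length = 14, by = "month")
ym <- format(x1, "%Y-%m")          # "2009-01" ...: zero-padded, sorts lexically
identical(order(ym), order(x1))    # TRUE
## to keep a date-like class instead of a character vector:
## library(zoo); as.yearmon(x1)
```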
Hello all
I have a geology map that has three levels, below:
geology
lithology landscape landform
The landform level is used as a covariate (with codes = 1, 2, 3, 4, 5) for
training a neural network, but this level has missing data as NA.
I want to replace the missing data of the landform level wi
Dear All
I trained a neural network on 200 data points and I did prediction for a grid
file (e.g. 100 points) as below:
snn<-predict(nn, newdata=data.frame(wetness=wetnessgrid$band1,
ndvi=ndvigrid$band1))
The pixels of snn are the same as those of wetnessgrid or ndvigrid.
I want to convert thi
On May 6, 2011, at 1:58 PM, Prof Brian Ripley wrote:
> On Fri, 6 May 2011, Bert Gunter wrote:
>
>> FWIW:
>>
>> Fitting higher order polynomials (say > 2) is almost always a bad idea.
>>
>> See e.g. the Hastie, Tibshirani, et. al book on "Statistical
>> Learning" for a detailed explanation why.
Hi,
If your geology map is a special kind of object, this may not work,
but if you are just dealing with a data frame or matrix type object
named, "geology" with columns, something like this ought to do the
trick:
geology[is.na(geology[, "landform"]), "landform"] <- 0
?is.na returns a logical ve
I figured out a poor way to do what I want.
meas<-runif(30)
times<-sort(runif(30))
timesdec<-seq(0,1,0.2)
ltim<-length(timesdec)
storing<-rep(0,ltim)
for (i in 1:ltim) {
if (i == 1) {rowstart <- 1} else {rowstart<-findInterval(timesdec[i-1],times)+1}
rowfinal<-findInterval(timesdec[i],times)
storing[i]
On May 6, 2011, at 4:16 PM, Ben Haller wrote:
As for correlated coefficients: x, x^2, x^3 etc. would obviously be
highly correlated, for values close to zero.
Not just for x close to zero:
> cor( (10:20)^2, (10:20)^3 )
[1] 0.9961938
> cor( (100:200)^2, (100:200)^3 )
[1] 0.9966219
Is th
On May 6, 2011, at 12:14 PM, Lee, Eric wrote:
Hello,
I'm running version R x64 v2.12.2 on a 64bit windows 7 PC. I have
two data vectors, x and y, and try to run archmCopulaFit. Most of
the copulas produce errors. Can you tell me what the errors mean
and if possible, how I can set arch
Perfect - that's it, Gabor, thanks a lot!
Dimitri
On Fri, May 6, 2011 at 4:11 PM, Gabor Grothendieck
wrote:
> On Fri, May 6, 2011 at 4:07 PM, Dimitri Liakhovitski
> wrote:
>> Hello!
>>
>> I'd like to take Dates and extract from them months and years - but so
>> that it sorts correctly. For examp
Hi Matthias,
If you know the column number you want to change, it is pretty straightforward.
## use the builtin mtcars dataset as an example
## and store it in variable, 'x'
x <- mtcars
## change the second column name to "cylinder"
colnames(x)[2] <- "cylinder"
## compare the column names of 'x'
Some good suggestions, just (as always) be aware of floating-point imprecision.
See FAQ 7.31
> s <- seq(1,30,0.1)
> s[8]
[1] 1.7
> s[8] == 1.7
[1] FALSE
Just trying to forestall future questions :-)
Dan
Daniel Nordlund
Bothell, WA USA
> -Original Message-
> From: r-help-boun...@r-proj
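To spell out the usual remedy: compare with a tolerance (all.equal)
rather than with ==:

```r
s <- seq(1, 30, 0.1)
s[8] == 1.7                    # FALSE: floating-point representation error
isTRUE(all.equal(s[8], 1.7))   # TRUE: equality up to a small tolerance
abs(s[8] - 1.7) < 1e-8         # the same idea written out by hand
```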
How do I use the generalized hyperbolic distribution package to estimate the
four parameters of the NIG distribution? I have a data set of stock returns
that I want to fit the parameters to.
Is it possible to create weighted boxplots or violin plots in lattice?
It seems that you can specify weights for panel.histogram() and
panel.densityplot(), but not for panel.bwplot or panel.violin().
Please let me know if I've missed something in the package documentation.
Thanks!
--
Raphael
I'm using the survey api. I am taking 1000 samples of size of 100 and replacing
20 of those values with missing values. I'm trying to use sequential hot deck
imputation, and thus I am trying to figure out how to replace missing values
with the value before it. Other things I have to keep in mind
Hi all, I am trying to find some way to tell R to treat this small number,
10^-20, as zero by default. This means if any number is below this then it
should be treated as negligible, or if I divide something by any number less
than that (in absolute terms) then Inf will be displayed, etc.
I have
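R has no global option for this, but a tiny helper (the name "chop" is
mine) makes the threshold explicit, and zapsmall() does something similar
for printing:

```r
eps <- 1e-20
# hypothetical helper: snap anything smaller than eps (in absolute value) to 0
chop <- function(x, tol = eps) ifelse(abs(x) < tol, 0, x)
chop(c(1e-25, 3, -1e-21))   # 0 3 0
1 / chop(1e-25)             # Inf, because the denominator was chopped to zero
zapsmall(c(1, 1e-20))       # rounds near-zero values for printing: 1 0
```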
Hi everyone,
I've been using R, LaTeX and Sweave for some years now, but today it confuses
me a lot:
Running Sweave produces only figures in .pdf format, no .eps figures.
The header looks like this:
<>=
There was no error message.
Does anybody have an idea?
Any changes in the Sweave package?
Or a mi
Thanks Max. I'm using now the library caret with my data. But the models
showed a correlation under 0.7. Maybe the problem is with the variables that
I'm using to generate the model. For that reason I'm asking for some
packages that allow me to reduce the number of features and to remove the
worst f
Is there a way to generate a new dataframe that produces x lines based on the
contents of a column?
for example: I would like to generate a new dataframe with 70 lines of data[1,
1:3], 67 lines of data[2, 1:3], 75lines of data[3,1:3] and so on up to numrow =
sum(count).
> data
pop fam yesorno
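One way, sketched with made-up data (the column name "count" is my
assumption, taken from your numrow = sum(count)):

```r
## made-up stand-in for the real data frame
data <- data.frame(pop = c("a", "b", "c"), fam = 1:3, yesorno = c(1, 0, 1),
                   count = c(70, 67, 75))
## repeat row i of columns 1:3 count[i] times
newdata <- data[rep(seq_len(nrow(data)), times = data$count), 1:3]
nrow(newdata)    # 212, i.e. sum(data$count)
```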
Hi Frank,
To answer your request:
> print(f)
Logistic Regression Model
lrm(formula = Ignition ~ FMC + Charge, data = Fire)
Model Likelihood     Discrimination     Rank Discrim.
   Ratio Test           Indexes            Indexes
Obs
Dear users,
In a study with recurrent events:
My objective is to get estimates of survival (obtained through a Cox model) by
rank of recurrence and by treatment group.
With the following code (corresponding to a model with a global effect of the
treatment=rx), I get no error and manage to obtai
Hello Folks,
I'm working on trying to scrape my first web site and ran into an issue
because I really
don't know anything about regular expressions in R.
library(XML)
library(RCurl)
site <- "http://thisorthat.com/leader/month";
site.doc <- htmlParse(site, ?, xmlValue)
At the ?, I realize that
On May 6, 2011, at 6:00 PM, mk90...@gmx.de wrote:
> Hi everyone,
>
> I've been using R, LaTeX and Sweave for some years now, but today it
> confuses me a lot:
>
> Running Sweave produces only figures in .pdf format, no .eps figures.
>
> The header looks like this:
> <>=
>
> There was no error messag
On May 6, 2011, at 3:15 PM, Christopher G Oakley wrote:
> Is there a way to generate a new dataframe that produces x lines based on the
> contents of a column?
>
> for example: I would like to generate a new dataframe with 70 lines of
> data[1, 1:3], 67 lines of data[2, 1:3], 75lines of data[3,
On May 6, 2011, at 5:17 PM, claire wrote:
How to use the package generalized hyperbolic distribution in order to
estimate the four parameters in the NIG-distribution? I have a data
material
with stock returns that I want to fit the parameters to.
On StackOverflow you have already been told
On May 6, 2011, at 6:22 PM, Eva Bouguen wrote:
Dear users,
In a study with recurrent events:
My objective is to get estimates of survival (obtained through a Cox
model) by rank of recurrence and by treatment group.
With the following code (corresponding to a model with a global
effect of t
Look at the 'na.locf' function in the 'zoo' package.
On Fri, May 6, 2011 at 5:29 PM, Nick Manginelli wrote:
> I'm using the survey api. I am taking 1000 samples of size of 100 and
> replacing 20 of those values with missing values. Im trying to use sequential
> hot deck imputation, and thus I a
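A small demonstration of na.locf ("last observation carried forward"),
which does exactly this replacement:

```r
library(zoo)
x <- c(3, NA, 5, NA, NA, 8)
na.locf(x)                  # 3 3 5 5 5 8: each NA replaced by the last value seen
na.locf(x, na.rm = FALSE)   # would also keep a leading NA if the series began with one
```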