Read the help for plot.function, and note the argument "add".
After plotting your first function, you'll want to set "add=TRUE".
Note that the plotting interval defaults to [0,1], which is totally out
to lunch for the functions you are interested in. You'll want to set
from=0 and to=100 or the
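Put together, a minimal sketch of that advice (reading the question's fused year/shape column below as year 2011, shape 2.455, scale 21.657, and so on — an assumption on my part):
pars <- data.frame(shape = c(2.455, 2.328, 2.336),
                   scale = c(21.657, 21.486, 22.642))
# first density sets up the axes; the rest are overlaid with add = TRUE
curve(dweibull(x, shape = pars$shape[1], scale = pars$scale[1]),
      from = 0, to = 100, ylab = "density", ylim = c(0, 0.05), col = 1)
for (i in 2:nrow(pars))
  curve(dweibull(x, shape = pars$shape[i], scale = pars$scale[i]),
        from = 0, to = 100, add = TRUE, col = i)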
On Nov 12, 2013, at 7:42 PM, array chip wrote:
> Hi Chris,
>
> Thanks for sharing your thoughts.
>
> The reviewer used the heterogeneity that I observed in my study for the power
> analysis. I understand what you have described. And I agree that with the
> sample size I have, I do not have eno
On 12/11/2013 21:45, Kevin Wright wrote:
I'm on Windows 7.
When I do Rcmd check pkg, I get this error from a .Rd file:
Error in plot.new() : plot region too large
Using the windows() and pdf() devices in interactive mode, the code in the
.Rd file works just fine.
I'm wondering if the graphics
Hello,
I am trying to plot multiple Weibull distributions in one graph.
Does anyone know how to produce this?
I already have the different parameters (shape and scale) for the Weibull
distributions.
Here are examples.
Year Shape Scale
2011 2.455 21.657
2010 2.328 21.486
2009 2.336 22.642
2
Hi,
You may want to have a look at library(maptools) and library(rgeos);
they have an amazing number of functions for this kind of analysis.
You can also read GPS data directly into R using maptools.
This question is more suitable for the r-sig-geo list.
On Tue, Nov 12, 2013 at 11:20 PM, Greg Snow <538...@gmail.co
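A hedged sketch of the distance part with the sp package (sp is my suggestion, not named above; the coordinates are made up): spDists() with longlat = TRUE returns great-circle distances in kilometres.
library(sp)
pts <- cbind(lon = c(23.72, 23.73, 21.73), lat = c(37.98, 37.99, 38.24))
d_km  <- spDists(pts, longlat = TRUE)                       # full distance matrix, in km
d_to1 <- spDistsN1(pts[-1, , drop = FALSE], pts[1, ], longlat = TRUE)
which.min(d_to1)                                            # which of the other points is closest to point 1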
Hi,
You may also flatten the list of lists and apply the function. In the example I
provided,
lapply(do.call(c,unlist(lst1,recursive=FALSE)),sum)
#or
use one of the functions from this link
http://stackoverflow.com/questions/8139677/how-to-flatten-a-list-to-a-list-without-coercion
lapply(flatt
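A self-contained version of the flattening idea with a small made-up lst1 (the real one appears later in the thread): unlist(recursive = FALSE) peels one level of nesting and do.call(c, .) concatenates the result into one flat list of vectors.
lst1 <- list(list(list(1:3, 4:5), list(6:7)),
             list(list(8:10)))
flat <- do.call(c, unlist(lst1, recursive = FALSE))
lapply(flat, sum)    # list of per-vector sums: 6, 9, 13, 27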
Hi Anindya,
You may try:
dat1 <- read.table(text="ID Week Event_Occurence
A 1 0
A 2 0
A 3 1
A 4 0
B 1 1
B 2 0
B 3 0
B 4 1",sep="",header=TRUE,stringsAsFactors=FALSE)
with(dat1,tapply(as.logical(Event_Occurence),ID,FUN=which ))
#or
lapply(split(dat1,dat1$ID),function(x) which(!!x[,3]))
A.K
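For the data above, both approaches should return the weeks in which each ID had an event; sketched by hand (not pasted from a session), the result is roughly:
# $A
# [1] 3
#
# $B
# [1] 1 4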
Just trying to understand how geom_abline works with facets in ggplot.
By way of example, I have a dataset of student test scores. These are in a data
table dt with 4 columns:
student: unique student ID
cohort: grouping factor for students (A, B, ..., H)
subject: subject of the test (Englis
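A hedged sketch of the mechanism (the data frame, the x/y columns and the line values are all made up, since the question's data table is truncated above): give geom_abline() its own small data frame keyed by the facetting variable so each facet gets its own reference line.
library(ggplot2)
df  <- data.frame(cohort = rep(LETTERS[1:4], each = 25),
                  x = rnorm(100), y = rnorm(100))
ref <- data.frame(cohort = LETTERS[1:4],
                  intercept = c(-1, 0, 1, 2), slope = 1)
ggplot(df, aes(x, y)) +
  geom_point() +
  facet_wrap(~ cohort) +
  geom_abline(data = ref, aes(intercept = intercept, slope = slope))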
Hi,
You may try:
(Please provide a reproducible example.)
lst1 <- list(list(list(c(14L, 13L, 5L, 4L, 9L), c(14L, 16L, 13L, 2L)),
list(c(3L, 2L, 7L, 1L, 8L), c(1L, 9L, 4L, 8L, 7L, 3L, 5L,
2L), c(4L, 2L, 1L))), list(list(c(7L, 14L, 15L, 4L, 3L),
c(10L, 12L, 6L, 1L)), list(c(5L, 8L, 1
Hi Chris,
Thanks for sharing your thoughts.
The reviewer used the heterogeneity that I observed in my study for the power
analysis. I understand what you have described. And I agree that with the sample
size I have, I do not have enough power to detect the heterogeneity that I
observed with sig
On 11/13/2013 04:43 AM, Jared Duquette wrote:
Hi all,
The plot.cox.zph function automatically plots the x-axis, but I would like
to change the intervals and labels of my cox.zph plots to 0, 50,100,150,
200. However, I cannot get the function to do so. I tried using the
"axis(..." function, but th
On Nov 12, 2013, at 6:10 PM, array chip wrote:
> Hi, this is a statistical question rather than a pure R question. I have
> gotten much help from the R mailing list in the past, so I would like to try
> here and would appreciate any input:
>
> I conducted Mantel-Haenszel test to show that the performance of
Hi, this is a statistical question rather than a pure R question. I have
gotten much help from the R mailing list in the past, so I would like to try
here and would appreciate any input:
I conducted Mantel-Haenszel test to show that the performance of a diagnostic
test did not show heterogeneity among 4 study
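A hedged illustration only (the 2x2x4 counts below are made up, not the poster's data): mantelhaen.test() tests association assuming a common odds ratio across the 4 studies; a formal test of heterogeneity of the odds ratios themselves needs something extra, e.g. DescTools::BreslowDayTest() on the same array, assuming that package is acceptable.
tab <- array(c(10, 5, 20, 40,
               12, 6, 22, 38,
                9, 7, 18, 41,
               11, 4, 25, 36),
             dim = c(2, 2, 4),
             dimnames = list(test = c("pos", "neg"),
                             disease = c("yes", "no"),
                             study = paste0("S", 1:4)))
mantelhaen.test(tab)    # common-odds-ratio test across the four strata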
On Nov 12, 2013, at 9:43 AM, Jared Duquette wrote:
> Hi all,
> The plot.cox.zph function automatically plots the x-axis, but I would like
> to change the intervals and labels of my cox.zph plots to 0, 50,100,150,
> 200. However, I cannot get the function to do so. I tried using the
> "axis(..." f
It means that 10/10 = 1 < 3.
It also means that what you're trying to do (fitting 10 cases to 12000
variables) is ridiculous (assuming I understand your message
correctly).
Cheers,
Bert
On Tue, Nov 12, 2013 at 1:07 PM, Kripa R wrote:
> Hi I'm getting the following warning msg after ?cv.glmnet
I'm on Windows 7.
When I do Rcmd check pkg, I get this error from a .Rd file:
Error in plot.new() : plot region too large
Using the windows() and pdf() devices in interactive mode, the code in the
.Rd file works just fine.
I'm wondering if the graphics device settings are the culprit and am tryi
You did not follow the posting guide: you did not use plain-ASCII email, and
you used illegal characters in your source code. This caused extra work.
Once I cleaned up your characters and made the example self-contained,
the labels worked fine for me. Here's the cleaned-up code:
library(rms)
x1 <- ru
Hi I'm getting the following warning msg after ?cv.glmnet and I'm wondering
what it means...
dim(x) 10 12000;
dim(y) 10; #two groups case=1 and control=0
cv.glmnet(x, y)
Warning message:
Option grouped=FALSE enforced in cv.glmnet, since < 3 observations per fold
Thanks,
.kripa
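A hedged sketch of what triggers the warning: with n = 10 and the default nfolds = 10, each fold holds a single observation, so cv.glmnet() falls back to grouped = FALSE. Passing grouped = FALSE yourself (or nfolds = 3, so each fold has at least 3 observations) should silence the message, though it does not fix the deeper n << p problem Bert points out. The matrix below is a smaller stand-in for the poster's 10 x 12000 data.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(10 * 1200), nrow = 10)   # stand-in data, 10 obs x 1200 vars
y <- rep(0:1, each = 5)
fit <- cv.glmnet(x, y, grouped = FALSE)    # no warning; or use nfolds = 3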
Hi,
Say I have the following data
ID Week Event_Occurence
A 1 0
A 2 0
A 3 1
A 4 0
B 1 1
B 2 0
B 3 0
B 4 1
which indicates whether an individual experienced an event in a particular week.
I wish to create a list such that the first element of the list will be a
vector listing the week numbers when the event
Hi,
You may try:
op <- options(digits=12)
r1 <- 4; c1 <- 4
A <- matrix(scan("file1.txt",n=r1*c1),r1,c1,byrow=TRUE)
options(op) #reset
A.K.
Dear all,
I’m using the 'scan' function to import data into a matrix from a .txt
file. The data are numbers with 12 significant digits (e.g.
471773.30987
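One point worth noting (my own clarification, not from the thread): options(digits = 12) only changes how many significant digits R prints; scan() already stores the values in full double precision (about 15-16 significant digits), so nothing is lost either way. A quick check:
x <- 471773.30987          # placeholder value in the spirit of the example
print(x)                   # 471773.3 with the default digits = 7
print(x, digits = 12)      # 471773.30987
sprintf("%.5f", x)         # formatting is independent of the option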
Hi all,
I've been struggling along with this for a while, so your help would be
greatly appreciated.
Using an array of prices from a T-SQL database for a number of stocks I
wish to calculate the volatility of returns for these stocks, which will
then be multiplied by the weight of that stock in a
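A hedged sketch of that calculation (the prices, tickers and weights are made up; the poster's data come from T-SQL): volatility per stock as the standard deviation of log returns, then the weighted combination the poster describes.
set.seed(42)
prices <- matrix(cumprod(1 + rnorm(3 * 250, 0, 0.01)), ncol = 3,
                 dimnames = list(NULL, c("AAA", "BBB", "CCC")))
returns <- diff(log(prices))            # log returns, one column per stock
vol     <- apply(returns, 2, sd)        # per-stock volatility
weights <- c(AAA = 0.5, BBB = 0.3, CCC = 0.2)
sum(weights * vol)                      # weight-multiplied figure, as described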
Hi,
Try:
Reduce(function(...) cbind2(...),Zlist)
A.K.
Suppose I have three matrices, such as the following:
mat1 <- Matrix(rnorm(9), 3)
mat2 <- Matrix(rnorm(9), 3)
mat3 <- Matrix(rnorm(9), 3)
I now need to column bind these and I could do the following if
there were only two of those mat
Hi all,
The plot.cox.zph function automatically plots the x-axis, but I would like
to change the intervals and labels of my cox.zph plots to 0, 50,100,150,
200. However, I cannot get the function to do so. I tried using the
"axis(..." function, but that did not override the existing plot labels.
Be
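A hedged sketch (the veteran fit and the tick positions are illustrative, and it assumes plot.cox.zph passes ... through to the underlying plot() call): the default cox.zph() transform ("km") warps the time axis, so custom tick positions are easiest with transform = "identity"; then suppress the axis and draw your own.
library(survival)
fit <- coxph(Surv(time, status) ~ trt + karno, data = veteran)
zp  <- cox.zph(fit, transform = "identity")
plot(zp, var = 1, xaxt = "n")                 # first covariate, no x-axis drawn
axis(1, at = c(0, 50, 100, 150, 200))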
I am new to R and have already posted this question on stack overflow. The
problem is that I did not understand the answers as the R documentation about
the discussed functions (e.g. 'convolve') is quite complicated for a newbie
like me. Here's the question:
I have a big text file with more t
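Since the question is cut off above, here is only a minimal, hedged illustration of base convolve() (the conv helper name is mine): with type = "open" and the second argument reversed, it performs ordinary discrete convolution, the same operation as polynomial multiplication.
conv <- function(x, y) convolve(x, rev(y), type = "open")
conv(c(1, 2, 3), c(1, 1))   # 1 3 5 3, i.e. (1 + 2z + 3z^2)(1 + z)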
Hello,
Maybe using ?Reduce:
Zlist <- c(mat1, mat2, mat3)
Z <- Reduce(cbind2, Zlist)
Ztmp <- cbind2(mat1, mat2)
Z2 <- cbind2(Ztmp, mat3)
identical(Z, Z2) # TRUE
Also, I prefer list(mat1, mat2, mat3), not c().
Hope this helps,
Rui Barradas
On 12-11-2013 17:20, Doran, Harold wrote:
Sup
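Put together as a self-contained sketch (assuming the Matrix package, which provides the cbind2() methods used here):
library(Matrix)
mat1 <- Matrix(rnorm(9), 3)
mat2 <- Matrix(rnorm(9), 3)
mat3 <- Matrix(rnorm(9), 3)
Zlist <- list(mat1, mat2, mat3)      # list(), as Rui suggests, rather than c()
Z <- Reduce(cbind2, Zlist)           # a 3 x 9 Matrix, same as chaining cbind2()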
> You can use the "offset" function as part of a formula in "lm" (and
> other model fitting functions) to set a specific slope or set of
> slopes. Using this up front will give you the correct residuals,
> standard errors, etc. This is better than trying to modify a fitted
> regression object.
G
> Even if this is possible, won't all the other estimates (i.e., standard error
> of betas) produced be junk since they aren't derived from the associated
> estimators?
That was actually my primary concern. It looks like the offset is the
solution.
Thanks.
Collin.
>
> Michael
>
Dila, take a look at this:
http://r.789695.n4.nabble.com/How-to-replace-all-lt-NA-gt-values-in-a-data-frame-with-another-not-0-value-td2125458.html
Does that help?
Best,
Collin.
On Tue, 12 Nov 2013, dila radi wrote:
> Hi all,
>
> I have a data set with missing values. I w
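A minimal sketch of the approach in the linked thread (the data frame and the replacement value are made up): logical indexing replaces every NA at once.
df <- data.frame(a = c(1, NA, 3), b = c(NA, 5, 6))
df[is.na(df)] <- -99      # or any other non-zero placeholder value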
You can use the "offset" function as part of a formula in "lm" (and
other model fitting functions) to set a specific slope or set of
slopes. Using this up front will give you the correct residuals,
standard errors, etc. This is better than trying to modify a fitted
regression object.
On Tue, Nov
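A minimal sketch of the offset() idea (the data and the fixed slope of 0.25 are made up): the x2 term enters the model with its coefficient pinned at 0.25, and only the remaining coefficients are estimated.
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 1 + 2 * d$x1 + 0.25 * d$x2 + rnorm(50)
fit <- lm(y ~ x1 + offset(0.25 * x2), data = d)
summary(fit)   # residuals and standard errors reflect the fixed slope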
I don't know of any packages specifically for gps coordinates, but
more generally there are packages for spatial statistics (of which the
gps coordinates would be a special case).
Check the CRAN Taskviews Spatial and SpatialTemporal for descriptions
of packages that will do what you ask (and a lot
On Nov 12, 2013, at 11:59, wrote:
>> Think you're going to have to explicitly define "hand tailor." I
>> haven't a clue what you mean. (Someone else might of course).
>
> Ah, I want to set a specific coefficient value for each of the terms
> rather than rely on training. Thus given:
>
> y
http://www.math.uvic.ca/faculty/reed/dPlN.3.pdf is the original ref and has the
equations.
library(VGAM) for *pareto() and library(stats) for *lnorm() should get you most
of the way there.
On Nov 12, 2013, at 10:47 AM, "b. alzahrani"
wrote:
> Hi guys
> I would like to generate random number
Hello,
I have a problem with the kriging function. I get this error:
"Error in solve.default(matrix(A, n + 1, n + 1)) :
system is computationally singular: reciprocal condition number =
5.53457e-17"
But it has worked with other data, and now I don't understand why I am
getting this error.
Here is a data ex
Hi all,
I have a data set with missing values. I would like to estimate those
missing values using the normal ratio method.
Below is part of my data:
AS BL Serdang Jhr Phg Target station
00.012.8 0.0 23.7 0.0
60.081.7 0.2 0.0 NA
01.
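A hedged sketch of the normal ratio method itself (the station names, observations and long-term "normal" means below are hypothetical, since the table above is garbled): the missing value at the target station is the average of the surrounding observations, each scaled by the ratio of long-term means.
normal_ratio <- function(p, n_surround, n_target) {
  mean((n_target / n_surround) * p, na.rm = TRUE)
}
p_obs    <- c(AS = 60.0, BL = 81.7, Serdang = 0.2, Jhr = 0.0)   # same-period observations
n_means  <- c(AS = 2400, BL = 2600, Serdang = 2500, Jhr = 2450) # long-term means
n_target <- 2550                                                # target station's long-term mean
normal_ratio(p_obs, n_means, n_target)   # estimate for the missing target value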
I also see this behaviour and would also love to hear about a work-around
for it.
Best
Davis
On Wednesday, April 18, 2012 8:35:06 PM UTC+1, regcl wrote:
>
> I use R on a Linux server running under Screen or Emacs server from a Mac
> desktop.
>
> I use Screen and/or Emacs server so that I can
Suppose I have three matrices, such as the following:
mat1 <- Matrix(rnorm(9), 3)
mat2 <- Matrix(rnorm(9), 3)
mat3 <- Matrix(rnorm(9), 3)
I now need to column bind these and I could do the following if there were only
two of those matrices because cbind2() has an x and y argument
Zlist <- c(mat
Kindly help me correct the program given below; when I run it, it gives me an
error. I want to estimate a robust MM fit.
rm(list=ls())
require(stats)
require(robustbase)
x1<-as.matrix(c(5.548,4.896,1.964,3.586,3.824,3.111,3.607,3.557,2.989))
y<-as.matrix(c(2.590,3.770,1.270,1.445,3.290,0.93
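For the MM estimate itself, a hedged sketch with robustbase (only x1 and y survive the truncation above, so the model uses just those; lmrob() fits an MM-type estimator by default):
library(robustbase)
x1 <- c(5.548, 4.896, 1.964, 3.586, 3.824, 3.111, 3.607, 3.557, 2.989)
y  <- c(2.590, 3.770, 1.270, 1.445, 3.290, 0.930, 1.600, 1.250, 3.450)
fit <- lmrob(y ~ x1)     # MM-type robust regression
summary(fit)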
You could use a slider to move along the x-axis looking at your data in the
specified window width. Below is an example with some fake data.
library(rpanel)
# some fake data
myx <- 623+1:1000
myy <- 0.01*myx + rnorm(1000)
# width of viewing window
mywindow <- 256
# plot the data with a slider
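A hedged completion of the truncated example (rpanel's rp.control()/rp.slider() API as I recall it; the draw() callback is mine): the slider variable 'start' picks the left edge of the viewing window.
library(rpanel)
myx <- 623 + 1:1000
myy <- 0.01 * myx + rnorm(1000)
mywindow <- 256
draw <- function(panel) {
  keep <- panel$myx >= panel$start & panel$myx < panel$start + panel$mywindow
  plot(panel$myx[keep], panel$myy[keep], xlab = "x", ylab = "y")
  panel
}
pnl <- rp.control(myx = myx, myy = myy, mywindow = mywindow)
rp.slider(pnl, start, from = min(myx), to = max(myx) - mywindow,
          action = draw, showvalue = TRUE)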
> Think you're going to have to explicitly define "hand tailor." I
> haven't a clue what you mean. (Someone else might of course).
Ah, I want to set a specific coefficient value for each of the terms
rather than rely on training. Thus given:
y ~ x0 + x1 + x2 + a
I would like to set:
a =
Think you're going to have to explicitly define "hand tailor." I
haven't a clue what you mean. (Someone else might of course).
-- Bert
On Tue, Nov 12, 2013 at 8:33 AM, wrote:
> Greetings, I'm working on a project where I want to hand-tailor an lm.
> Specifically I want to construct an lm with a
Greetings, I'm working on a project where I want to hand-tailor an lm.
Specifically I want to construct an lm with an existing formula and then
hand tailor the coefficients myself. Is there an established method for
that other than manipulating the $coefficients values?
Thank you,
Colli
There's a recipe involving two exponentials and a normal deviate as formula (6)
in (1st hit on Google for "dpln distribution").
http://www.math.uvic.ca/faculty/reed/dPlN.3.pdf
It should be a no-brainer to code it up in R.
-pd
On 12 Nov 2013, at 16:47 , b. alzahrani wrote:
> Hi guys
> I woul
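A hedged reading of that two-exponentials-plus-a-normal recipe (the rdpln helper is my own; check it against formula (6) in the linked paper): a dPlN variate is exp(Normal(mu, sigma) + Exp(1)/alpha - Exp(1)/beta).
rdpln <- function(n, alpha, beta, mu = 0, sigma = 1) {
  exp(rnorm(n, mu, sigma) + rexp(n) / alpha - rexp(n) / beta)
}
set.seed(1)
x <- rdpln(1e4, alpha = 2.5, beta = 1.5, mu = 0, sigma = 1)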
Hi guys
I would like to generate random numbers from the Double Pareto Log Normal
Distribution (DPLN). Does anyone know how to do this in R, or whether there is
any built-in function?
Thanks
Bander
wsl.ch> writes:
>
> Hello
> I'm working with mixed effects models using lmer() and have some
> problems getting all the variance components of the model's random
> effects. I can get the variance of the random effect out of the
> summary and use it for further calculations, but not the variance
>
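A hedged sketch with lme4 (the sleepstudy model is illustrative, not the poster's): as.data.frame(VarCorr(fit)) lists every variance and covariance component of the random effects, plus the residual variance.
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
vc <- as.data.frame(VarCorr(fit))
vc[, c("grp", "var1", "var2", "vcov", "sdcor")]   # all variance components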
Very helpful, many thanks.
On 12 November 2013 16:09, Rui Barradas wrote:
> Hello,
>
> Once again, use lapply.
>
> mlist <- lapply(seq_along(m2), function(i) m2[[i]])
> names(mlist) <- paste0("mod", seq_along(mlist))
>
> slist <- lapply(mlist, summary)
>
>
> plist <- lapply(slist, `[[`, 'p.table'
Hello,
Once again, use lapply.
mlist <- lapply(seq_along(m2), function(i) m2[[i]])
names(mlist) <- paste0("mod", seq_along(mlist))
slist <- lapply(mlist, summary)
plist <- lapply(slist, `[[`, 'p.table')
Hope this helps,
Rui Barradas
On 12-11-2013 13:28, Kuma Raj wrote:
Thanks for the s
I would like to have a relative x-axis in R. I am reading a time series from
an Excel file and I want to show it in a plot, and I also want to have a window
which moves over the values.
My aim is to see which point belongs to which time (row number in the Excel
file), i.e. I am reading from the 401st row in 1100t
Dear all,
I would like to ask you if there are any gps libraries.
I would like to be able to handle them,
-like calculate distances in meters between gps locations,
-or find which gps location is closer to a list of gps locations.
Is there something like that in R?
I would like to thank you in
Kindly help me correct the program given below; when I run it, it gives me an
error.
rm(list=ls())
require(stats)
require(robustbase)
x1<-as.matrix(c(5.548,4.896,1.964,3.586,3.824,3.111,3.607,3.557,2.989))
y<-as.matrix(c(2.590,3.770,1.270,1.445,3.290,0.930,1.600,1.250,3.450))
x2<-as.
Thanks for the script, which works perfectly. I am interested in doing
model checking and also in extracting the coefficients for the linear and
spline terms. For model checking I could run this script, which will give
different plots to test model fit: gam.check(m2[[1]]).
Thanks to mnel from SO I co
We would need to see the contents of the "mplus" file that is read in. Quite
possibly it's overwriting either "iter" or "count" with an incompatible data
type.
I would also recommend contacting Walter Leite directly to verify you have a
proper mplus source file.
/* partial quote follows */
Th
Hi Jim, it worked nicely.
par(mar) seems to control the space at the bottom, left, right and top. What
if I want to increase the distance between the plotted matrix and the added
color legend a bit? In my version they look slightly packed together.
Regards
Alex
On Monday, November 11,
Hello,
Use nested lapply(). Like this:
m1 <- lapply(varlist0,function(v) {
lapply(outcomes, function(o){
f <- sprintf("%s~ s(time,bs='cr',k=200)+s(temp,bs='cr') +
Lag(%s,0:6)", o, v)
gam(as.formula(f),family=quasipoisson,na.action=na.omit,data=df)
})})
m1 <-
Hi,
Thanks for the advice. I solved it by following an option on the drop-down
menu which I had not seen earlier, and got:
source("c:\\Users\\...\\filename.R")
Thank you for the prompt reply,
Luca
2013/11/12 Pascal Oettli
> Hello,
>
> What is the result when you use source("C:/Users/...R")
Hello all,
I can't find a thread with a problem similar to mine. I am trying to create
random fields and I keep getting the same error:
library(geoR)
N<-10
nslon<-250
nslat<-250
range<-10
sim2 <- grf(nslon*nslat, grid="reg", nx=nslon, ny=nslat,cov.pars=c(1,
range),
nsim=N, cov.model = "ex
I have asked this question on SO, but it attracted no response, thus I am
cross-posting it here in the hope that someone can help.
I want to estimate the effect of pm10 and o3 on three outcome(death, cvd
and resp). What I want to do is run one model for each of the main
predictors (pm10 and
On Mon, 11 Nov 2013 18:02:19 +0300
Keniajin Wambui wrote:
> I am using R 3.0.2 on a 64 bit machine
>
> I have a data set from 1989-2002. The data has four variables
> serialno, date, admission ward, temperature and bcg scar.
>
> serialno admin_ward date_admn bcg_scar temp_axilla yr
> 70162W