Prof Brian Ripley stats.ox.ac.uk> writes:
>
> On Mon, 11 Jul 2011, Tomaz wrote:
>
> > I upgraded R on windows xp from 2.12.2 to 2.13.1 and now I can not
> > process Rnw files with windows cp1250 encoding. Sweave complains:
>
> Which is of course not an ISO Standard encoding. One way out is to
Any NA values or values outside the support region of your distribution?
Uwe
On 11.07.2011 23:21, Peter Maclean wrote:
I am trying to estimate a gamma function using real data and I am getting the
following error messages.
When I set a lower limit, the error message is "L-BFGS-B needs finite
On 12.07.2011 09:01, Tomaz wrote:
Prof Brian Ripley stats.ox.ac.uk> writes:
On Mon, 11 Jul 2011, Tomaz wrote:
I upgraded R on windows xp from 2.12.2 to 2.13.1 and now I can not
process Rnw files with windows cp1250 encoding. Sweave complains:
Which is of course not an ISO Standard enco
> > flags <- c(rep(1, length(patient_indices)), rep(0,
> > length(control_indices)))
> > # dataset is a data.frame and param the parameter to be analysed:
> > data1 <- dataset[,param][c(patient_indices, control_indices)]
> > fit1 <- glm(flags ~ data1, family = binomial)
> > new.data<- seq(0, 3
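The snippet above breaks off at the prediction grid. A hedged, self-contained completion of that workflow (the simulated `data1` is invented here; the original used `dataset[, param]`, and the 0-3 grid is taken from the fragment):

```r
# Fit a logistic model on stand-in data and predict over a 0-3 grid.
set.seed(1)
data1 <- runif(40, 0, 3)
flags <- rbinom(40, 1, plogis(-1 + data1))            # 1 = patient, 0 = control
fit1  <- glm(flags ~ data1, family = binomial)
new.data <- data.frame(data1 = seq(0, 3, by = 0.1))
pred <- predict(fit1, newdata = new.data, type = "response")
```

`type = "response"` returns fitted probabilities rather than log-odds.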
On 12/07/11 09:04, Joseph Park wrote:
Greetings,
I would like to estimate a spectral coherence between
two timeseries. stats::spectrum() returns a coh matrix
which estimates coherence (squared).
A basic test from which I expect near-zero coherence:
x = rnorm(500
Hi,
Am 11.07.2011 22:57, schrieb Lyndon Estes:
> ctch[ctch$threshold == 3.5, ]
> # [1] threshold val tp fp tn fn tpr fpr tnr fnr
> #<0 rows> (or 0-length row.names)
this is the very effective FAQ 7.31 trap.
http://cran.r-project.org/doc/FAQ/R-F
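FAQ 7.31 in one screenful: fractional values built by arithmetic are rarely exactly equal to their decimal literals, so `==` subsetting silently matches nothing. A tolerance-based comparison avoids the trap:

```r
x <- 0.1 + 0.2
x == 0.3                    # FALSE: binary doubles cannot hold 0.3 exactly
isTRUE(all.equal(x, 0.3))   # TRUE: all.equal() compares with a tolerance

# The same idea when subsetting on a numeric column:
thr <- seq(0, 1, by = 0.1)
sum(thr == 0.3)             # may be 0 -- the "<0 rows>" symptom above
sum(abs(thr - 0.3) < 1e-8)  # tolerance-based test finds the value
```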
Dear all,
I would like to use the apply or a similar function belonging to this
family, but applying for each column (or row) but let say for each q
columns. For example I would like to apply a function FUN for the first q
columns of matrix X then for q+1:2*q and so on. If I do apply (X, 2, FUN)
Hi all,
I first create a matrix/data frame called "d2" if another matrix
accomplishes some restrictions "dacc2"
da2 <- da1[colSums(dacc2) > 9, ]
da2 <- da2[da2[, 13] == 24, ]
write.csv(da2, file = paste('hggi', i, '.csv', sep = ''))
The thing is, if da2 finally cannot pass the filters, it cannot write a
Dear all,
I would like to obtain the Brier score prediction error at different times t
for an extended Cox model. Previously I have used the 'pec' function
(pec{pec}) to obtain prediction error curves for standard Cox PH models but
now I have data in the counting process format (I have a covariate
Dear R user,
After I imported data (csv format) into R, I printed it out, but it is in
non-numeric format.
I then used the "as.numeric" function.
However, the output is really awful!
> PE[1,90:99]
V90 V91 V92 V93 V94 V95 V96
Dear R community,
cloudnumbers.com provides researchers and companies with the resources
to perform high performance calculations in the cloud. As
cloudnumbers.com's community manager I would like to invite you to register and
test R on a computer cluster in the cloud for free:
http://my.cloudnumbers.com/re
Hi,
You don't provide us with a reproducible example, so I can't provide you with
actual code. But two approaches come to mind:
1. Create da2 with one row and n columns, then change the appropriate elements,
if any, based on your conditions.
2. Do the conditional parts, then check to see whether d
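The second approach can be sketched like this (`da1`, `dacc2`, and the filters are stand-ins modelled on the quoted question; the `nrow()` guard is the point):

```r
# Toy stand-ins for the poster's objects.
da1   <- data.frame(matrix(1:39, nrow = 3, ncol = 13))
dacc2 <- matrix(5, nrow = 4, ncol = 3)

da2 <- da1[colSums(dacc2) > 9, ]
da2 <- da2[da2[, 13] == 24, ]       # note ==, not =
i <- 1
if (nrow(da2) > 0) {                # only write when rows survived the filters
  write.csv(da2, file = paste("hggi", i, ".csv", sep = ""))
} else {
  message("no rows passed the filters; nothing written for i = ", i)
}
```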
Jessica,
This would be easier to solve if you gave us more information, like str(PE).
However, my guess is that your data somewhere has a nonnumeric value in that
column, so the entire column is being imported as factor. It's not
"really awful" -
R is converting those factor values to their numer
On 07/12/2011 06:38 PM, Jessica Lam wrote:
Dear R user,
After I imported data (csv format) in R, I called it out. But it is in
non-numeric format.
Then using "as.numeric" function.
However, the output is really awful !
PE[1,90:99]
V90 V91 V92 V93
Have you noticed that you sent your message to the R-help list only and forgot
to include the original poster? You also forgot to cite the original
question (and any other former parts of the thread as far as there was
any). Please do so when sending messages to a mailing list such as R-help.
Thanks,
gfcoeffs <- function(s, n) {
t <- seq(-n,n,1) ## assuming 2*n+1 taps
return ( exp(-(t^2/(2*s^2)))/sqrt(2*pi*s^2) )
}
2011/6/29 Martin Wilkes :
> I want to filter my time series with a low-pass filter using a Gaussian
> smoothing function defined as:
>
> w(t) = (2πσ^2)^0.5 exp(-t^2/2σ^2)
>
> I
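Putting the coefficients above to work (a sketch: `s = 2` and `n = 6` are arbitrary choices, the signal is simulated, and `gfcoeffs` is repeated so the block is self-contained):

```r
gfcoeffs <- function(s, n) {
  t <- seq(-n, n, 1)                            # 2*n + 1 taps
  exp(-(t^2 / (2 * s^2))) / sqrt(2 * pi * s^2)
}
w <- gfcoeffs(s = 2, n = 6)
w <- w / sum(w)                                 # normalise taps for unit gain
set.seed(42)
x <- sin(seq(0, 10, length.out = 200)) + rnorm(200, sd = 0.3)
smoothed <- stats::filter(x, w, method = "convolution", sides = 2)
```

With `sides = 2` the filter is centred, so the first and last `n` values of the result are NA.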
On Tue, Jul 12, 2011 at 7:15 AM, Markus Schmidberger wrote:
> This is only a selection of our top features. To get more information
> check out our web-page (http://www.cloudnumbers.com/) or follow our blog
> about cloud computing, HPC and HPC applications (with R):
> http://cloudnumbers.com/blog
Hello all,
Could someone help me with the time zones in understandable & practical way?
I got completely stuck with this.
Have googled for a while and read the manuals, but without solutions...
---
When data imported from Excel 20
Uwe Ligges statistik.tu-dortmund.de> writes:
>
>
> On 12.07.2011 09:01, Tomaz wrote:
> > Prof Brian Ripley stats.ox.ac.uk> writes:
> >
> >>
> >> On Mon, 11 Jul 2011, Tomaz wrote:
> >>
> >>> I upgraded R on windows xp from 2.12.2 to 2.13.1 and now I can not
> >>> process Rnw files with windows
Thank you! I should have realized that without explicitly engaging
some form of averaging (which raises a windowing question) that the
coh is always 1.
On 7/12/2011 4:48 AM, Rolf Turner wrote:
On 12/07/11 09:04, Joseph Park wrote:
Greetings,
I would like to es
I will be out of the office from 7 July till 14 July with no access to my
emails.
In urgent cases please contact Ms. Edit Kárpáti (karpati.e...@gyemszi.hu).
Regards,
Péter Mihalicza
Thanks both of you for help!
This is my conclusion based on Rolf's and David's suggestion: (x=data.frame)
By "for-looping" over a data.frame and deleting certain rows with
x=x[-i,]
its better to collect all rows which need to be deleted in a vector and do
one final delete step:
collecting: vect
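The truncated conclusion can be sketched like this (toy data; the point is that row numbering never shifts mid-loop because there is a single delete at the end):

```r
x <- data.frame(id = 1:10, val = c(3, 8, 1, 9, 2, 7, 4, 6, 5, 10))
vect <- which(x$val > 6)           # collect the row indices to drop
if (length(vect) > 0) x <- x[-vect, ]
```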
Dear all,
I am new to programming in R.
I deal with microarray data, which is a data frame object type. I need to carry
out a few statistical procedures on this, one of them being the Pearson
correlation. I need to do this between each row, which is a gene. So the desired
result is a square matri
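The gene-by-gene square matrix described above can be obtained with `cor()`: it correlates columns, so transpose first so each gene (row) becomes a column (a sketch with toy data):

```r
set.seed(7)
expr <- matrix(rnorm(5 * 20), nrow = 5,
               dimnames = list(paste0("gene", 1:5), NULL))  # 5 genes, 20 samples
cc <- cor(t(expr), method = "pearson")                      # 5 x 5, genes vs genes
```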
Dear R-Users,
I run a MC simulation using the packages "foreach" and "doMC" on a
PowerMac with 24 cores. There are roughly a hundred parameter sets and I
parallelized the program in a way that each core computes one of these
parameter sets completely.
The problem is that some parameter sets t
Also note that the statistical method you are using does not seem in line
with decision theory, and you are assuming that the threshold actually
exists. It is seldom the case that the relationship of a predictor with the
response is flat on at least one side of the threshold. A smooth prediction
Hi Jim,
dropping them down gives 1 day less than it should, in both timezone
notations, CEST and CET.
> start
[1] "2002-09-04 CEST" "2000-07-27 CEST" "2003-01-04 CET" "2001-06-29 CEST"
"2005-01-12 CET" "2000-05-28 CEST" "2002-06-01 CEST" "2000-06-02 CEST"
"2000-02-27 CET" "2000-09-29
On Tue, Jul 12, 2011 at 6:58 AM, B Laura wrote:
> Hello all,
>
> Could someone help me with the time zones in understandable & practical way?
> I got completely stuck with this.
>
> Have googled for a while and read the manuals, but without solutions...
>
>
>
> --
It may be helpful to make sure that, in the dialog that pops up when saving a
spreadsheet to CSV, the option "Save cell content as shown" is checked - that
would leave numbers as numbers, not wrapping them in "". That has helped me at
least in a similar situation!
Rgds,
Rainer
On Tuesday 12 Ju
peter_petersen gmail.com> writes:
> I run a MC-Simulation using the the packages "foreach" and "doMC" on a
> PowerMac with 24 cores. There are roughly a hundred parametersets and I
> parallelized the program in a way, that each core computes one of these
> parametersets completely.
>
> The probl
On 11-07-12 6:42 AM, Tomaz wrote:
Uwe Ligges statistik.tu-dortmund.de> writes:
On 12.07.2011 09:01, Tomaz wrote:
Prof Brian Ripley stats.ox.ac.uk> writes:
On Mon, 11 Jul 2011, Tomaz wrote:
I upgraded R on windows xp from 2.12.2 to 2.13.1 and now I can not
process Rnw files with wi
>matplot(timestamp,xymatrix,type='l')
where timestamp is a vector filled with POSIXct objects and xymatrix is
a numeric 2x2 matrix plots but the horizontal axis labels are raw
unformatted timestamps.
I would like to format these in any of the available codes for strftime,
for instance format
Dear Gabor
http://rwiki.sciviews.org/doku.php?id=tips:data-io:ms_windows&s=excel
doesn't describe handling dates with daylight saving time issues.
The Date class can remove the time and timezone; however, when calculating the
difference in days between two such manipulated variables, the same problem appears if handling
thes
Hi,
I want to apply a function to a matrix, taking the columns 3 by 3. I could use
a for loop:
for(i in 1:3){ # here I assume my data matrix has 9 columns
j = i*3
set = my.data[,c(j-2,j-1,j)]
my.function(set)
}
which looks cumbersome and possibly slow. I was hoping there is some function
in th
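A loop-free version of the above: lay the column indices out as a q-row matrix and apply over its columns (`my.function` is a placeholder here; it is assumed to accept a q-column submatrix):

```r
my.data <- matrix(1:18, nrow = 2)     # 9 columns, so q = 3 gives 3 groups
my.function <- colMeans               # stand-in for the real function
res <- apply(matrix(seq_len(ncol(my.data)), nrow = 3), 2,
             function(idx) my.function(my.data[, idx]))
```

Each column of the index matrix holds one group of q column numbers, so `apply` hands `my.function` one q-column slice at a time.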
Hi all,
Is there any code to run fixed effects Tobit models in the style of Honore
(1992) in R?
(The original Honore article is here:
http://www.jstor.org/sici?sici=0012-9682%28199205%2960%3A3%3C533%3ATLALSE%3E2.0.CO%3B2-2)
Cheers
David
Hello,
Are there any built in or user defined functions for printing the date created
or date updated for a given file? Ideally a function that works across
operating systems.
Thanks!
Scott Chamberlain
On Jul 12, 2011, at 7:27 AM, Mitra, Sumona wrote:
Dear all,
I am new to programming in R.
You seem to think there is a "++" operation in R. That is not so.
I deal with microarray data,which is a data frame object type. I
need to carry out a few statistical procedures on this, one of them
Hi,
file.info()
does that.
Cheers
On 12.07.2011 15:29, Scott Chamberlain wrote:
> Hello,
>
> Are there any built in or user defined functions for printing the date
> created or date updated for a given file? Ideally a function that works
> across operating systems.
>
>
> Thanks!
> Scott
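A sketch of what `file.info()` returns for this (mtime, the last-modified timestamp, is portable; a true "creation" time is OS-dependent and not always what ctime reports):

```r
f <- tempfile()
writeLines("hello", f)
info <- file.info(f)
info$mtime                # last-modified timestamp, class POSIXct
```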
On 12 July 2011 12:27, Mitra, Sumona wrote:
> Dear all,
>
> I am new to programming in R.
>
You sure are ;-)
I deal with microarray data,which is a data frame object type. I need to
> carry out a few statistical procedures on this, one of them being the
> Pearson correlation. I need to do this
Hi Frederico. I would keep the data as it is, create two small vectors
referring to the ranges and use a mapply (as a sapply but with multiple
variables) for the function. Hope the example below is helpful, although as
usual someone out there will have a better solution for it.
> dta <- c()
> f
Eik,
Thanks very much!
Scott
On Tuesday, July 12, 2011 at 8:34 AM, Eik Vettorazzi wrote:
> Hi,
> file.info()
> does that.
>
> Cheers
>
> On 12.07.2011 15:29, Scott Chamberlain wrote:
> > Hello,
> >
> > Are there any built in or user defined functions for printing the d
Hi,
I am trying to do a lasso regression using the lars package with the following
data (see attached):
FastestTime WinPercentage PlacePercentage ShowPercentage BreakAverage FinishAverage Time7Average Time3Average Finish
116.90      0.14          0.14            0.29           4.43         3.29          117.56       117.77       5.00
116.23
I just realised that:
apply(matrix(1:dim(my.data)[2], nrow =3), 2,
function(x){my.function(my.data[,x])})
is the simplest possible method.
bw
F
On 12 Jul 2011, at 14:44, Filipe Leme Botelho wrote:
> Hi Frederico. I would keep the data as it is, create two small vectors
> referring to the r
Hi,
I am trying to do a lasso regression using the lars package with the following
data:
FastestTime WinPercentage PlacePercentage ShowPercentage BreakAverage FinishAverage Time7Average Time3Average Finish
116.90      0.14          0.14            0.29           4.43         3.29          117.56       117.77       5.00
116.23      0.29          0.43            0
> * David Winsemius [2011-07-11 18:16:25 -0400]:
>
> What is the point of offering this code?
To illustrate what I was talking about (code is its own specification).
I hoped that there was already a package doing that (and more in that
direction).
> It seems to be doing what you want
yes.
> Ar
On Jul 12, 2011, at 9:53 AM, Heiman, Thomas J. wrote:
Hi,
I am trying to do a lasso regression using the lars package with the
following data (see attached):
Nothing attached. (And now you have also sent an exact duplicate.)
snipped failed attempt to include data inline that was sabotaged
Hi,
Hopefully I got the formatting down.. I am trying to do a lasso regression
using the lars package with the following data (the data files is in .csv
format):
V1 V2 V3 V4 V5 V6 V7
I have two data frames:
> str(ysmd)
'data.frame': 8325 obs. of 6 variables:
$ X.stock : Factor w/ 8325 levels "A","AA","AA-",..: 2702
6547 4118 7664 7587 6350 3341 5640 5107 7589 ...
$ market.cap : num -1.00 2.97e+10 3.54e+08 3.46e+08 -1.00
...
$ X52.
How can I perform logarithmic binning in a scatterplot? I could only take the
log of the variables and plot them, but I am sure that is not the way. I
have very large data, and would want to plot those high-density
scatterplots and code them with different colors for the bins/density.
--
View thi
If you switch directly to the multicore package you can use the
mclapply() function. There, check for the parameter mc.preschedule=T /
F. You can use this parameter to improve the load balancing.
I do not know a parameter to tune foreach with this parameter.
Best
Markus
On Tuesday, 12.07
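A sketch of the switch described above. `mclapply()` now ships in the parallel package; `mc.preschedule = FALSE` hands jobs out one at a time, which improves load balancing when some parameter sets run much longer than others. (On Windows `mc.cores` must be 1, so the call degenerates to serial `lapply()`.)

```r
library(parallel)

params <- 1:8                    # stand-in for the real parameter sets
slow_task <- function(p) p^2     # placeholder for the real simulation

res <- mclapply(params, slow_task,
                mc.cores = if (.Platform$OS.type == "windows") 1 else 2,
                mc.preschedule = FALSE)   # dispatch one job per fork
unlist(res)
```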
Many thanks!!!
I will try this tonight; I have read this and I think it could help me.
I'm conscious the question was badly formulated; I will try to explain it
better next time!!!
On a side note: apply always accesses the function you use at least once. If
the input is a dataframe without any rows but
Hi all,
I have this information on a file ht.txt, imagine it is a data frame without
labels:
1 1 1 8 1 1 6 4 1 3 1 3 3
And in another table called "pru.txt" I have sequences similar to this
4 1 1 8 1 1 6 4 1 3 1 3 3
1 6 1 8 1 1 6 4 1 3 1 3 3
1 1 1 8 1 1 6 4 1 3 1 3 3
6 6 6 8 1 1 6 4 1 3 1 3 3
I want
Hi,
I am trying to do a lasso regression using the lars package with the following
data (see attached):
FastestTime WinPercentage PlacePercentage ShowPercentage BreakAverage FinishAverage Time7Average Time3Average Finish
116.90      0.14          0.14            0.29           4.43         3.29          117.56       117.77       5.00
116.23
Hi,
I have a data frame of about 700 000 rows which look something like this:
Date     Temperature  Category
2007102  16           A
2007102  17           B
2007102  18           C
but need it to be:
Date     TemperatureA  TemperatureB  TemperatureC
2007102
Dear All,
I have a collections of spatial data. I have to analyze pairs of these
point patterns to test their spatial interaction. I was moving towards
the cross K Ripley's function. The problem, however, are the following:
1) What is the best way to get a single real value that represents the
On Jul 12, 2011, at 10:12 AM, Sam Steingold wrote:
I have two data frames:
str(ysmd)
'data.frame': 8325 obs. of 6 variables:
$ X.stock : Factor w/ 8325 levels
"A","AA","AA-",..: 2702 6547 4118 7664 7587 6350 3341 5640 5107
7589 ...
$ market.cap :
On Tue, 2011-07-12 at 10:12 -0400, Heiman, Thomas J. wrote:
> Hi,
>
> Hopefully I got the formatting down.. I am trying to do a lasso regression
> using the lars package with the following data (the data files is in .csv
> format):
>
> V1 V2 V3
Hi Trying,
It would be helpful if you provided reproducible examples. It would
also be polite to sign a name so that we have something by which to
address you.
On Tue, Jul 12, 2011 at 8:00 AM, Trying To learn again
wrote:
> Hi all,
>
> I have this information on a file ht.txt, imagine it is a da
On Jul 12, 2011, at 10:12 AM, Heiman, Thomas J. wrote:
Hi,
Hopefully I got the formatting down.. I am trying to do a lasso
regression using the lars package with the following data (the data
files is in .csv format):
V1 V2 V3 V4
On Jul 12, 2011, at 8:42 AM, anglor wrote:
Hi,
I have a data frame of about 700 000 rows which look something like
this:
Date     Temperature  Category
2007102  16           A
2007102  17           B
2007102  18           C
but need it to be:
Date T
Hi,
(i) As David suggested, please use `dput` to provide examples of data!
(ii) The nut of your problem is that you are giving lars an object
that it is not expecting. It wants a *matrix* for its `x` variable, as
you'll see in the help for ?lars.
So, as long as this expression:
R> is.numeric(x)
Hi:
Try the cast() function in the reshape package. Using d as the name of
your data frame,
library(reshape)
cast(d, Date ~ Category, value = 'Temperature')
Date A B C
1 2007102 16 17 18
HTH,
Dennis
On Tue, Jul 12, 2011 at 5:42 AM, anglor wrote:
> Hi,
>
> I have a data frame of about
On Tue, Jul 12, 2011 at 8:57 AM, B Laura wrote:
> Dear Gabor
>
> http://rwiki.sciviews.org/doku.php?id=tips:data-io:ms_windows&s=excel
> doesnt describe handling dates with daylight saving time issues.
>
Two references were given and it's discussed in the R News article. It
was also mentioned ove
On Jul 11, 2011, at 9:16 PM, Steve Parker wrote:
> Hi there,
> I am using the RODBC library to connect to an Empress database.
> I have installed the ODBC data source with the server DNs number and port,
> and named the source "Trawl".
> It is the odbcDriverConnect that seems to have the problem,
If you don't need POSIXt types, as Gabor says don't use them. However, there
are good reasons to use them sometimes, and the most workable solution I have
found is to set your default timezone in R to a non-DST timezone before you
convert from character to POSIXct. This is dependent on your OS a
Hello,
In my lab we use a four parameter logistic fit model for our ELISA data
(absorbance values). We are currently testing the use of different solvents
and need to find a way to add a correlation value (such as an R squared or
something similar) so we can test different solvents in making this
Thanks Peter, Ted!
Best, Anirban
On Tue, Jul 12, 2011 at 4:54 AM, Ted Harding wrote:
> On 11-Jul-11 07:55:44, Anirban Mukherjee wrote:
>> Hi all,
>>
>> I wanted to mark the estimation sample: mark what rows (observations)
>> are deleted by lm due to missingness. For eg, from the original
>> exam
I've written out code for one particular file, and now I want to generate
the same kind of graphs and files for the rest of the similar data files.
When I plugged in this code, R produced only one plot for the file
"eight", and it states my error (see below). I have edited and checked my
code so man
On 2011-07-12 07:03, Sam Steingold wrote:
[snip]
the totally unnecessary semi-colons)
then why are they accepted?
optional syntax elements suck...
They're accepted because they *can* be useful (multiple
statements on one line).
Is there *any* language that can *not* be abused?
Peter Ehlers
Hi Susie,
At a guess, there are no non-missing arguments to min or max.
But no, we can't help you. You haven't provided a minimal reproducible
example, and without knowing anything about your data it is impossible
for the list to offer any constructive suggestions.
The posting guide offers sugge
On 07/12/2011 09:53 AM, Heiman, Thomas J. wrote:
## define x and y
x= x<-crs[,9]#predictor variables
y= y<-crs[1:8,] #response variable
This cannot be correct. The response variable is a vector, while the
predictor variables form a matrix. You have the response variable
consisting
This is just posed out of curiosity, (not as a criticism per se). But what is
the functional role of the argument na.rm inside the mean() function? If there
are missing values, mean() will always return an NA as in the example below.
But, is there ever a purpose in computing a mean only to recei
In SQL, the default is to ignore NULL (equivalent to NA in R).
However, it can be dangerous to fail to verify how much data was actually used
in an aggregation, so the logic behind the default na.rm setting may be one of
encouraging the user to take responsibility for missing data.
-
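In code, the default makes missingness loud; removal is an explicit opt-in:

```r
x <- c(1, 2, NA, 4)
mean(x)                  # NA: a silent answer would hide the missing value
mean(x, na.rm = TRUE)    # 7/3, computed over the three observed values
```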
On 12/07/2011 12:26 PM, Doran, Harold wrote:
This is just posed out of curiosity, (not as a criticism per se). But what is
the functional role of the argument na.rm inside the mean() function? If there
are missing values, mean() will always return an NA as in the example below.
But, is there e
Hi Harold,
Many (most?) of the statistics functions have a similar argument. I
suspect it is sort of to warn the user---you have to be explicit about
it rather than the program just silently removing or ignoring values
that would not work in the function called. I can think of one
example where I
Dear all,
I have a problem and it is very difficult for me to write the code.
I am reading a file(attached with this mail) using the code-
df=read.table("summary.txt",fill=T,sep="",colClasses = "character",header=T)
and dataframe df is like this-
V1 V2 CaseA CaseC CaseG CaseT new
10 13
Hi Vikas,
Here is one way:
df <- read.table("summary.txt", header = TRUE)
str(df)
df[, "total"] <- rowSums(df[, 3:6])
df[, 3:6] <- apply(df[, 3:6], 2, function(x) x / df[, "total"] * df[, "new"] * 2)
> head(df)
V1V2 CaseA CaseC CaseG CaseT new total
1 10 135344109 0 024
## Hello.. I have asked a similar question, but this is not fixed as before.
## I am running the following using Ubuntu OS:
R version 2.13.1 (2011-07-08)
Copyright (C) 2011 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-pc-linux-gnu (64-bit)
## when I do this:
On 7/7/2011 3:23 PM, elephann wrote:
Hi everyone!
I have a data frame with 1112 time series and I am going to randomly
sample r series z times to compose portfolios of different sizes
(r-security portfolios). For r=2 and z=1, that's:
A=seq(1:1112)
x1=sample(A,z,replace =TRUE)
x2=s
I've written out code for one particular file, and now I want to generate
the same kind of graphs and files for the rest of the similar data files.
For example, a file "8.csv" would look like such:
enc_callee inout o_duration type
A out 342 de
B in 234 de
C
Probability <- function(N, f, m, b, x, t) {
  # N is the number of lymph nodes
  # f is the fraction of dendritic cells (in the correct node) that have the antigen
  # m is the number of time steps
  # b is the starting position (somewhere in the node or somewhere in the gap
  b
Hi,
I'm currently trying to calculate local Getis-Ord Gi* statistics for a
169x315 cell matrix of temperature values, below is the code I currently
have (diffc is the data vector I am removing NaN values from, and I am
moving said values to diffD; -999 represents NaN values; id contains ID
values
It works well. Thanks so much.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/l
Hello, I'm new to this list. Sorry if my question or parts of it already came
up before.
For my research in geostatistics, I am working with large sets of data in R
(basically large matrices containing discrete x and y coordinates and a
value for a certain parameter). These sets are obtained by kr
Dear Susie,
See inline for some suggestions, but generally, I think you would
benefit from breaking this down into smaller pieces. The error you
are getting indicates the problem has to do with the plotting, but
that will be trickier to isolate while also dealing with reading in
data, looping, et
Dear list, I'm wondering if anyone can help me calculate the deviance
of either a zeroinfl or hurdle model from package pscl?
Even if someone could point me to the correct formula for calculating
the deviance, I could do the rest on my own.
I am trying to calculate a pseudo-R-squared measure based
On 12-Jul-11 17:18:26, mousy0815 wrote:
> Probability <- function(N, f, m, b, x, t) {
> #N is the number of lymph nodes
> #f is the fraction of Dendritic cells (in the correct node) that
have
> the
> antigen
> #m is the number of time steps
> #b is the starting position (som
Merging two posts (data and questions); see inline below.
On 7/11/2011 7:55 PM, Sigrid wrote:
Thank you, Dennis.
This is my regenerated dput codes. They should be correct as I closed off R
and re-ran them based on the dput output.
NB, this is the test dataset used later
structure(list(year
Hello,
I have a sample file:
chr22 100 150 125 21 0.145 +
chr22 200 300 212 13 0.05  +
chr22 345 365 351 12 0.09  +
chr22 500 750 510 15 0.10  +
chr22 500 750 642  9 0.02  +
chr22 800 90
Hello:
R has an extensive Help system. Please learn to use it.
?histogram
?help
Also see the online manual tutorial "An Introduction to R"
-- Bert
On Tue, Jul 12, 2011 at 12:41 PM, a217 wrote:
> Hello,
>
> I have a sample file:
>
> chr22 100 150 125 21 0.145 +
> chr22 2
On Jul 12, 2011, at 3:41 PM, a217 wrote:
Hello,
I have a sample file:
chr22 100 150 125 21 0.145 +
chr22 200 300 212 13 0.05+
chr22 345 365 351 12 0.09+
chr22 500 750 510 15 0.10+
chr22 500 750
Carson Farmer gmail.com> writes:
>
> Dear list, I'm wondering if anyone can help me calculate the deviance
> of either a zeroinfl or hurdle model from package pscl?
> Even if someone could point me to the correct formula for calculating
> the deviance, I could do the rest on my own.
What a
when do I need to use which()?
> a <- c(1,2,3,4,5,6)
> a
[1] 1 2 3 4 5 6
> a[a==4]
[1] 4
> a[which(a==4)]
[1] 4
> which(a==4)
[1] 4
> a[which(a>2)]
[1] 3 4 5 6
> a[a>2]
[1] 3 4 5 6
>
seems unnecessary...
--
Sam Steingold (http://sds.podval.org/) on CentOS release 5.6 (Final) X
11.0.60900031
htt
To answer your questions:
Yes, yes, and probably no. You will have to pick up any introductory manual
of R where questions 1 and 2 will be discussed.
For 1: you index x as in x[452,682]. For 2: there are ways to write (and
avoid) loops in R (e.g. for or while loops). Often avoidance is preferable
Well ...
which(a==4)^2
??
-- Bert
On Tue, Jul 12, 2011 at 1:17 PM, Sam Steingold wrote:
> when do I need to use which()?
>> a <- c(1,2,3,4,5,6)
>> a
> [1] 1 2 3 4 5 6
>> a[a==4]
> [1] 4
>> a[which(a==4)]
> [1] 4
>> which(a==4)
> [1] 4
>> a[which(a>2)]
> [1] 3 4 5 6
>> a[a>2]
> [1] 3 4 5 6
>>
>
On Jul 12, 2011, at 4:17 PM, Sam Steingold wrote:
when do I need to use which()?
a <- c(1,2,3,4,5,6)
a
[1] 1 2 3 4 5 6
a[a==4]
[1] 4
a[which(a==4)]
[1] 4
which(a==4)
[1] 4
a[which(a>2)]
[1] 3 4 5 6
a[a>2]
[1] 3 4 5 6
seems unnecessary...
It is unnecessary when `a` is a toy ca
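One non-toy case where `which()` matters: NA in the logical index. Plain logical subsetting keeps an NA placeholder, while `which()` drops it:

```r
a <- c(1, NA, 3, 4)
a[a > 2]           # NA 3 4 -- the NA comparison propagates into the result
a[which(a > 2)]    # 3 4    -- which() ignores NA
```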
On Tue, Jul 12, 2011 at 1:17 PM, Sam Steingold wrote:
> when do I need to use which()?
See ?which
For examples, try:
example(which)
>> a <- c(1,2,3,4,5,6)
>> a
> [1] 1 2 3 4 5 6
>> a[a==4]
> [1] 4
>> a[which(a==4)]
> [1] 4
>> which(a==4)
> [1] 4
>> a[which(a>2)]
> [1] 3 4 5 6
>> a[a>2]
> [1] 3
Here is a worked example. Can you point out to me where in temp rmean is
stored? Thanks.
Tom
> library(survival)
> library(ISwR)
>
> dat.s <- Surv(melanom$days,melanom$status==1)
> fit <- survfit(dat.s~1)
> plot(fit)
> summary(fit)
Call: survfit(formula = dat.s ~ 1)
time n.risk n.event survi
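A sketch of where the restricted mean lives, using the survival package's own `lung` data instead of ISwR's `melanom` so the block is self-contained (the exact component names can differ across survival versions, hence the grep):

```r
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
tab <- summary(fit)$table
tab[grep("rmean", names(tab))]    # restricted mean and its standard error
print(fit, print.rmean = TRUE)    # also shows it alongside the median
```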
Hello all,
I am using AddHealth data to fit a cure, aka split population model using nltm.
I am not sure how to account for the complex survey design - does anyone have
any suggestions? Any help would be greatly appreciated!
Sincerely,
Sam
I have 4 columns and 56 rows of made up data that I want to plot as a series
of bar graphs. The idea is to create one bar graph for each of the 4 columns
using a for loop. I tried the following command in RStudio and when I type x
in the console I get just the 4th graph instead of all four graphs.