On 03/30/2010 04:39 PM, Dong H. Oh wrote:
...
* checking R code for possible problems ... NOTE
Found possibly global 'T' or 'F' in the following function:
ar.dual.dea
Error in ar.dual.dea(ar.dat, noutput = 1, orientation = 1, rts = 1, ar.l =
matrix(c(0, :
F used instead of FALSE
Executi
> 5000 samples, Exponential distribution (f(x), lambda=0.0005, 0<=x<=360)
If you don't need the truncation at 360 you can just use rgamma to generate
the exponentials
rgamma(5000,shape=1,rate=0.0005)
(It's not 100% clear but I assume your lambda is an inverse of a scale
parameter)
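If the truncation at 360 does matter, a minimal sketch using inverse-CDF sampling
(still assuming lambda is the rate) is:
lambda <- 0.0005
Fmax <- pexp(360, rate = lambda)                 # CDF value at the upper truncation point
x <- qexp(runif(5000, 0, Fmax), rate = lambda)   # invert the exponential CDF
range(x)                                         # all draws fall in [0, 360]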
Why are y
Dear useRs,
I am trying to build my package (nonparaeff), which deals with some models of
data envelopment analysis.
The building worked well, but checking complains when it tests examples.
Zipped nonparaeff.Rcheck is attached.
Following is the log.
-
a
Hi Everybody,
I am having a problem while running code in R related to kNN imputation of
my matrix.
The code I am using is as follows:
t <- read.table(file="/home/ankhee/Desktop/Different_datasets/KADOSH_2005",
check.names=FALSE, row.names=1,colClasses=NA, sep="\t",header=T)
r<-as.matrix(t)
limit
What you have done is calculate correlations between two binary variables. You
say you wanted to calculate the correlation between two truncated variables.
One way to do this here is to make a temporary copy setting the excluded values
to missing, e.g.:
> tBoth <- Both
> is.na(tBoth[tBoth > 2.5]) <- TRUE
cor.test(Both[,1],Both[,2])
What does
> Both[,1]
show you?
cor.test(Both[,1]>2.5,Both[,2]>2.5)
What does
> Both[,1] > 2.5
show you?
Dear Thomas,
While it may be true that "R (and S) are *accused* of being slow,
memory-hungry, and able to handle only
small data sets" (emphasis added), the accusation is false, rendering the
*accusers* misinformed. Transparency is another, perhaps more interesting
matter. R-users can *experience
Hi,
I've got 4 variables that I want to effectively 'stack' so that I have a
grand R variable and a grand L variable.
This works to achieve that goal:
Twin1cor<-with(twin.wide,cbind(ACDepthR.1,ACDepthL.1))
Twin2cor<-with(twin.wide,cbind(ACDepthR.2,ACDepthL.2))
Both<-rbind(Twin1cor,Twin2cor)
>
Bogaso wrote:
OK. I understood this. But the problem is I have to execute that for at least
130. Is there any possibility to break that calculation into sub-sections so
that each section can carry out the calculation efficiently?
Thanks,
Why do you 'have' to do this? What are you act
It means that in your documentation file (the .Rd file), you have an
entry for 'mydata' that does not appear in the code.
So: your Rd file has:
\usage{
myfunction(mydata,otherarg)
}
and your function looks like this:
myfunction <- function(otherarg,...){etc}
good luck,
remko
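For the data-set flavour of this warning, a minimal sketch (object and file
names are made up) of keeping the documentation and the package in line:
data/mydata.rda      # ships an object called 'mydata'
man/mydata.Rd        # documents that same object, roughly:
\name{mydata}
\docType{data}
\alias{mydata}
\usage{data(mydata)}
\title{Example data set}
\description{Illustrative data set.}
\keyword{datasets}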
-
Hi,
I'm getting a warning "Data sets with usage in documentation object 'data'
but not in the code"
I'm attaching the image for your reference
http://n4.nabble.com/file/n1738289/dataset.jpg
Thank you
OK. I understood this. But the problem is I have to execute that for at least
130. Is there any possibility to break that calculation into sub-sections so
that each section can carry out the calculation efficiently?
Thanks,
Dear R-helpers,
I tried to build a DLL as I have done so many times before, but this time,
on my new machine, it gives the error:
(from cmd window)
>R CMD SHLIB Boxcnt.f
MAKE Version 5.2 Copyright (c) 1987, 2000 Borland
Error c:/PROGRA~1/R/R-210~1.1/share/make/winshlib.mk 4: Command syntax error
*
Hi:
2^40
[1] 1.099512e+12
Do you have enough memory for almost 1.1 trillion rows and 40 columns?
This is a good example of the 'power law' that Stephen Strogatz discussed in
his New York Times article today:
http://opinionator.blogs.nytimes.com/2010/03/28/power-tools/?hp
HTH,
Dennis
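A quick back-of-the-envelope check of the memory such a grid would need (an
editorial sketch, 8 bytes per numeric value):
rows  <- 2^40                # about 1.1e12 combinations
bytes <- rows * 40 * 8       # 40 numeric columns
bytes / 2^40                 # roughly 320 TiB before any copying overhead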
On Mon, Mar
Hi, good morning,
I got following error which looks strange to me while executing this code :
> temp <- expand.grid(rep(list(c(1,0)),40))
Error in rep.int(rep.int(seq_len(nx), rep.int(rep.fac, nx)), orep) :
invalid 'times' value
In addition: Warning message:
In rep.int(rep.int(seq_len(nx), re
All,
The kohonen predict function is returning NA for SOM predictions
regardless of data used... even the package example for a SOM using
wine data is returning NA's
Does anyone have a working example SOM. Also, what is the purpose of
trainY, what would be the dependent data for an unsupervised SO
yehengxin wrote:
>
> Why does R need the concept of "Vector"? In my opinion, it is a useless
> and confusing concept. A vector is simply a special case of a matrix
> whose row or column number is equal to 1. When I take submatrix from one
> matrix and if row or column number is 1, R will auto
Actually R looks at it the other way around. It regards a matrix as a
special case of a vector. A vector has no dimensions. A vector with
dimensions is an array. An array with two dimensions is a matrix.
Try using drop=FALSE like this:
m <- matrix(1:6, 3)
m[, 2, drop = FALSE]
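For contrast (a small editorial addition), without drop = FALSE the dimension
is dropped and a plain vector comes back:
m[, 2]                 # vector: dim() is NULL
m[, 2, drop = FALSE]   # 3 x 1 matrix: dim() is c(3, 1)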
On Mon, Mar 29,
> Why does R need the concept of "Vector"? In my opinion, it is a useless and
> confusing concept. A vector is simply a special case of a matrix whose row
> or column number is equal to 1. When I take submatrix from one matrix and
> if row or column number is 1, R will automatically convert it i
I construct a 2 x 2 table in Excel which contains the following:
Date          Value
12/31/2008    1.0
If I transfer this range to Rexcel with a put dataframe operation to a
variable x, then x$Date is displayed as
Date
1 2008-12-30 23:00:00
The datetime value is one
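As a general aside on Excel dates in R (an editorial note, not specific to the
RExcel transfer above): numeric serials from Excel's Windows 1900 date system
usually convert with the 1899-12-30 origin, e.g.
as.Date(39813, origin = "1899-12-30")   # 39813 is Excel's serial for 2008-12-31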
> My name is shruti. When I was checking a package I got a warning "Data sets
> with usage in documentation object 'data' but not in the code". Can anyone
> help me with this?
Can you please provide the exact code you entered into your workspace
that gave you this error, so we can reproduce it and h
On 30/03/2010, at 12:13 PM, yehengxin wrote:
>
> Why does R need the concept of "Vector"? In my opinion, it is a useless and
> confusing concept. A vector is simply a special case of a matrix whose row
> or column number is equal to 1. When I take submatrix from one matrix and
> if row or col
Hi there,
Can anyone please tell me if it is possible to limit parameters in nlrq()
to 'upper' and 'lower' bounds as per nls()? If so how??
Many thanks in advance
Try using aggregate( )
Why does R need the concept of "Vector"? In my opinion, it is a useless and
confusing concept. A vector is simply a special case of a matrix whose row
or column number is equal to 1. When I take submatrix from one matrix and
if row or column number is 1, R will automatically convert it into a v
Thanks a lot for the help guys. That's exactly what I was looking for. I'm
trying to avoid loops, but still don't know what tasks simply require them.
Thanks again!
Cheers,
Jasn
Gabor, thanks a lot!
I removed everything from the work space but the data frame - and then
DF[is.na(DF)]<-0 has worked!
Thanks a lot!
Dimitri
On Mon, Mar 29, 2010 at 8:45 PM, Gabor Grothendieck
wrote:
> It's going to be pretty hard to do anything useful if you can't even do
> simple operations li
Hi,
I'm a bit puzzled. I used exactly the same code as in the RcppExamples
package to try adding an RcppFrame object to an RcppResultSet. When run,
it gives me a segmentation fault. I'm using gcc 4.1.2 on Red Hat
64-bit. I'm not sure if this is the cause of the problem. Any advice
would be greatly appre
It's going to be pretty hard to do anything useful if you can't even do
simple operations like that without overflowing memory, but anyway try
this (untested):
write.table(DF, "DF.csv", sep = ",", quote = FALSE)
rm(DF)
DF <- read.csv(pipe("sed s/NA/0/g DF.csv"))
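If even that is too tight, a column-by-column loop (an editorial sketch) keeps
the extra memory to roughly one column at a time:
for (j in seq_along(DF)) {
    x <- DF[[j]]
    x[is.na(x)] <- 0        # replace NAs in this column only
    DF[[j]] <- x
}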
On Mon, Mar 29, 2010 at 8:33 PM, D
To be specific, my data frame is 4000 by 2200.
On Mon, Mar 29, 2010 at 8:33 PM, Dimitri Liakhovitski wrote:
> Just tried it. It's definitely faster - but I get the same error:
> " Reached total allocation of 1535Mb:"
>
> On Mon, Mar 29, 2010 at 8:27 PM, Gabor Grothendieck
> wrote:
>> See if this
Just tried it. It's definitely faster - but I get the same error:
" Reached total allocation of 1535Mb:"
On Mon, Mar 29, 2010 at 8:27 PM, Gabor Grothendieck
wrote:
> See if this works for you:
>
> DF[is.na(DF)] <- 0
>
> On Mon, Mar 29, 2010 at 8:21 PM, Dimitri Liakhovitski
> wrote:
>> Dear R'er
See if this works for you:
DF[is.na(DF)] <- 0
On Mon, Mar 29, 2010 at 8:21 PM, Dimitri Liakhovitski wrote:
> Dear R'ers,
>
> I have a very large data frame (over 4000 rows and 2,500 columns). My
> task is very simple - I have to replace all NAs with a zero. My code
> works fine on smaller data f
Dear R'ers,
I have a very large data frame (over 4000 rows and 2,500 columns). My
task is very simple - I have to replace all NAs with a zero. My code
works fine on smaller data frames - but I have to deal with a huge one
and there are many NAs in each column.
R runs out of memory on me ("Reached
On 29 March 2010 23:20, robbert blonk wrote:
> Dear list,
>
> I try to set a secondary y-axis in a lattice xyplot. This works. However, I
> am unable to set a proper legend/key together with the 2nd y-axis under
> general xyplot procedures. See example below.
>
> The combination of the par.setting
On Mar 29, 2010, at 6:52 PM, Ali Tofigh wrote:
Assume you have a vector of characters x:
x
[1] "a" "b" "a" "d" "d" "c"
I use a function that counts the number of times each string occurs
in x:
sapply(unique(x), function(s) {sum(x == s)})
a b d c
2 1 2 1
Is there a more efficient way
Hi,
My name is shruti. When I was trying to check the package I got this error:
"Data sets with usage in documentation object 'data' but not in the code"
Can anyone help me with this?
Assume you have a vector of characters x:
> x
[1] "a" "b" "a" "d" "d" "c"
I use a function that counts the number of times each string occurs in x:
> sapply(unique(x), function(s) {sum(x == s)})
a b d c
2 1 2 1
Is there a more efficient way of doing this?
Cheers,
/Ali
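A standard alternative (not shown in the truncated replies above) is table(),
which does the counting in compiled code:
table(x)
## x
## a b c d
## 2 1 1 2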
Hi,
My name is shruti. When I was checking a package I got a warning "Data sets
with usage in documentation object 'data' but not in the code". Can anyone
help me with this?
Thank you
Hi Maxim,
This is the wrong list for this question, please subscribe to and ask
this question on the bioconductor list.
Info on how to subscribe is here:
http://www.bioconductor.org/docs/mailList.html
-steve
On Mon, Mar 29, 2010 at 4:24 PM, Maxim wrote:
> Hi,
>
> I have a question concerning
On Mar 29, 2010, at 4:49 PM, shruti wrote:
Hi,
My name is shruti. When I was trying to check the package I got this
error:
"Data sets with usage in documentation object 'data' but not in the
code"
Can anyone help me with this? I'm attaching the file for your
reference:
http://n4.nabble.com
Hi,
My name is shruti. When I was trying to check the package I got this error:
"Data sets with usage in documentation object 'data' but not in the code"
Can anyone help me with this? I'm attaching the file for your reference:
http://n4.nabble.com/file/n1695616/dataset.jpg
Thank you
Hi,
Does R have a Decision Tree functionality akin to 'Precision Tree' by
Palisade? When I search, I end up with 'rpart' but this does not
appear to be what I am looking for.
Kind regards,
Per Bak
Copenhagen
Hello Dear,
I am trying to generate samples by using Monte Carlo simulation. For
example,
1000 samples, Exponential distribution (f(x), lambda=0.0005, 0<=x<=360)
Is there any package for Monte Carlo or just use random sample generation
function?
Many thanks for your help in advance,
Jin
-
On 29/03/2010 5:14 PM, Matthew Keller wrote:
Hi all,
I would like to run the following from within R:
awk '{$3=$4="";gsub(" ","");print}' myfile > outfile
However, this obviously won't work:
system("awk '{$3=$4="";gsub(" ","");print}' myfile > outfile")
and this won't either:
system("awk '{
Hi R-users:
Can anyone give an example of giving starting values for MCMCglmm?
I can't find any anywhere.
I have 1 random effect (physicians, and there are 50 of them)
and family="ordinal".
How can I specify starting values for my fixed effects? It doesn't seem to have
the option to do so.
Than
Hi all,
I would like to run the following from within R:
awk '{$3=$4="";gsub(" ","");print}' myfile > outfile
However, this obviously won't work:
system("awk '{$3=$4="";gsub(" ","");print}' myfile > outfile")
and this won't either:
system("awk '{$3=$4='';gsub(' ','');print}' myfile > outfile
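One common fix (a sketch, not from the truncated thread) is to escape the
inner double quotes for R and let the single quotes shield the awk program
from the shell:
system("awk '{$3=$4=\"\"; gsub(\" \",\"\"); print}' myfile > outfile")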
How do I fit a mixed effects model with two crossed random effects for grouped
time survival data?
I tried coxme with no luck.
Suppose that y is survival time, uncens is censoring indicator, trt is
treatment below.
ran.eff1 and ran.eff2 below are two crossed random effects. This way, it
wou
LS,
How large a dataset can glm fit with a binomial link function? I have a set
of about 100.000 observations and about 8000 explanatory variables (a factor
with 8000 levels).
Is there a way to find out how large datasets R can handle in general?
Thanks in advance,
geelman
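As a rough back-of-the-envelope (an editorial sketch): the dense model matrix
alone, with the factor expanded to roughly 8000 dummy columns in double
precision, already needs
1e5 * 8000 * 8 / 2^30   # about 6 GB, before glm makes its own working copies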
On Mon, 29 Mar 2010, Gabor Grothendieck wrote:
On Mon, Mar 29, 2010 at 4:12 PM, Thomas Lumley wrote:
On Sun, 28 Mar 2010, kMan wrote:
This was *very* useful for me when I dealt with a 1.5Gb text file
http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
Tw
On Sat, Mar 27, 2010 at 4:19 AM, n.via...@libero.it wrote:
> Hi I have a question,
> as im not able to import a csv file which contains a big dataset(100.000
> records) someone knows how many records R can handle without giving problems?
> What im facing when i try to import the file is that R ge
On Mon, Mar 29, 2010 at 4:12 PM, Thomas Lumley wrote:
> On Sun, 28 Mar 2010, kMan wrote:
>
>>> This was *very* useful for me when I dealt with a 1.5Gb text file
>>>
>>> http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
>>
>> Two hours is a *very* long time
On Sun, 28 Mar 2010, Tom La Bone wrote:
>So, am I missing something obvious here or is the
> "survey" package meant only for analyzing survey data once you have it in
> hand?
Yes, basically. The package title is "analysis of complex survey samples", and
the book is "a guide to analysis using R"
Hi,
I have a question concerning the analysis of some affymetrix chips. I
downloaded some of the data from GEO GSE11324 (see below). In doing so I'm
stuck after I identified the probesets with significant changes. I have
problems in assigning probeset specific gene names as well as getting the
gen
Does anyone know of courses to learn R programming in the DC area? I know
there is the conference in Gaithersburg, but I was curious if there was
anything sooner.
Thanks!
On Mon, 29 Mar 2010, serdal wrote:
hi,
I am writing my master thesis and I am dealing with 146474 observations
(panel data); I have just learned R, so I am a beginner!
I am trying to use the "plm" package and I have a duplication problem;
I have written the following commands to read my da
Would like to thank every one once more for your great help.
I was able to reduce the time from god knows how many hours to about 2 minutes!
Really appreciate it!
Dimitri
On Sat, Mar 27, 2010 at 11:43 AM, Martin Morgan wrote:
> On 03/26/2010 06:40 PM, Dimitri Liakhovitski wrote:
>> My sincere apo
On Sun, 28 Mar 2010, kMan wrote:
This was *very* useful for me when I dealt with a 1.5Gb text file
http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
Two hours is a *very* long time to transfer a csv file to a db. The author
of the linked article has not docu
Hi
I think ?bxp (the pars section of it) is what you are searching for.
x <- rnorm(100)
par(bg = "black", fg = "white", col.axis = "white", col.lab = "white")
boxplot(x, col = "black", notch = TRUE, boxcol = "white", medcol = "white",
        whiskcol = "white", staplecol = "white", outcol = "white")
HTH
Lukas Schefczyk
--
From: "Frostygoat"
Sent: Monday
Hi Bill,
Without an example dataset it's hard to see exactly what you need to
do. But you can get started by looking at the documentation for the
reshape function (?reshape), and by looking at the reshape package.
The reshape package has an associated web page
(http://had.co.nz/reshape/) with links
On Mar 29, 2010, at 1:53 PM, Thomas Jensen wrote:
Dear R-list,
I have a problem which I think is quite basic, but so far google has
not
helped me.
I have two vectors like this:
vector_1 <- c(Belgium, Spain, Greece, Ireland, Luxembourg,
Netherlands,
Portugal)
vector_2 <- c(Denmark, Lux
On 3/29/2010 1:53 PM, Thomas Jensen wrote:
> Dear R-list,
>
> I have a problem which I think is quite basic, but so far google has not
> helped me.
>
> I have two vectors like this:
>
> vector_1 <- c(Belgium, Spain, Greece, Ireland, Luxembourg, Netherlands,
> Portugal)
>
> vector_2 <- c(Denmark
Hi Thomas,
%in% does the trick:
vector_1 <- c("Belgium", "Spain", "Greece", "Ireland", "Luxembourg",
"Netherlands","Portugal")
vector_2 <- c("Denmark", "Luxembourg")
vector_1[!(vector_1 %in% vector_2)]
HTH,
Stephan
Thomas Jensen schrieb:
Dear R-list,
I have a problem which I think is quit
?setdiff
Bert Gunter
Genentech Nonclinical Biostatistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Thomas Jensen
Sent: Monday, March 29, 2010 10:53 AM
To: r-help@r-project.org
Subject: [R] Finding common and unique elements
Try this:
setdiff(vector_1, vector_2)
On Mon, Mar 29, 2010 at 2:53 PM, Thomas Jensen
wrote:
> Dear R-list,
>
> I have a problem which I think is quite basic, but so far google has not
> helped me.
>
> I have two vectors like this:
>
> vector_1 <- c(Belgium, Spain, Greece, Ireland, Luxembourg, Ne
Hi!
I am using geeglm to fit a Poisson model to a timeseries of count data as
follows. Since there are no clusters I use 73 values of 1 for the ids. The
problem I have is that I am getting standard errors of zero for the
parameters. What am I doing wrong?
Thanks, Michelle
> N_Base
[1] 95 8
Hi, I'm looking for a way to get white boxplots on a black
background. The following is insufficient because although the box is
white, I can't figure out how to change the whisker color to white.
x <- rnorm(100)
par(bg = "black")
boxplot(x)
boxplot(x, col = "white", notch=T)
Is there no way to
Dear R-list,
I have a problem which I think is quite basic, but so far google has not
helped me.
I have two vectors like this:
vector_1 <- c(Belgium, Spain, Greece, Ireland, Luxembourg, Netherlands,
Portugal)
vector_2 <- c(Denmark, Luxembourg)
I would like to find the elements in vector_1 that
I have a data frame that I created using read.table on a csv spreadsheet.
The data look like the following:
Steer.ID stocker.trt Finish.trt Date Days Wt ..
Steer.Id, stocker.trt, Finish.trt are factors-- Date, Days, Wt are data
that are repeated 23 times (wide format).
I want t
I've been calling R from shell using the following (as example) ...
#!/bin/bash
for dir in $(ls *.txt); do
R CMD BATCH script.R
done
Muhammad
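A slightly fuller sketch (editorial; note that the loop above never actually
hands $dir to the script) passes each file name through --args:
#!/bin/bash
for f in *.txt; do
    R CMD BATCH --no-save "--args $f" script.R "${f%.txt}.Rout"
done
# and inside script.R:  infile <- commandArgs(trailingOnly = TRUE)[1]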
Tsjerk Wassenaar wrote:
Hi,
That seems quite neat. To make it a bit more flexible, and maybe do
some argument acrobatics with bash, you could change
Hi,
I am writing my master thesis and I am dealing with 146474 observations
(panel data); I have just learned R, so I am a beginner!
I am trying to use the "plm" package and I have a duplication problem;
I have written the following commands to read my data and create my model
>dsn<-plm.data
Okay, I'll try again with .txt extension. Thanks David.
On Mon, Mar 29, 2010 at 12:50 PM, David Winsemius wrote:
> I would have made it through the mail-server had you given it an extension
> of .txt but not so with the .rsh extension.
>
>
>
> On Mar 29, 2010, at 12:31 PM, Jason E. Aten wrote:
>
that was way too simple (and completely dumb on my part!)
Thanks! =)
Typo: "*Pauli* Exclusion Principle"
Bert Gunter
Genentech Nonclinical Biostatistics
seq (0, 183, 365, 549, 732, 915, 1095)
should be
c(0, 183, 365, 549, 732, 915, 1095)
see ?seq and ?c for why.
Sarah
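Putting the two pieces together (a sketch; 'fit' stands in for whatever
survfit object is being plotted):
plot(fit, xaxt = "n")                               # draw without the default x-axis
axis(1, at = c(0, 183, 365, 549, 732, 915, 1095))   # then add the custom ticks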
On Mon, Mar 29, 2010 at 2:11 PM, Euphoria wrote:
>
> Hi all!
>
> I keep getting the "too many arguments" error when I insert the xaxt="n",
> axis(1, at=seq (0, 183, 365, 549, 732,
Hi all!
I keep getting the "too many arguments" error when I insert the xaxt="n",
axis(1, at=seq (0, 183, 365, 549, 732, 915, 1095)) statement in the code
below for a survival plot.
How can I go around this issue? Is there another way for me to change the
x-axis?
Also, I would prefer the y-a
Tom:
You asked whether two groups have the same underlying population 1st and 2nd
moments. The answer is: no they don't. Nothing is ever exactly the same as
anything else (indeed, I think this is the Paul Exclusion Principle ;-) ).
So quoting Jim Holtman: "What is the question?" That certainly
Hi,
That seems quite neat. To make it a bit more flexible, and maybe do
some argument acrobatics with bash, you could change the first few
lines to something like
#!/bin/bash
exec R --vanilla -q --slave -e "source(file=pipe(\"sed -n
/^##RSTART/,\$p $0\"))" --args $@
##RSTART
# Script here
Ch
Xiang Gao-2 wrote:
>
> How can we prove that the treatment did not make any difference in the
> amount of protein A? In other words, pre- and post-treatment are the same.
>
There is no way to "prove" that there is no difference. While you could use
some alternative hypothesis, people rarely understand
I know what "get a bigger sample means". I have no clue what "ask a more
statistically meaningful question" means. Can you elaborate a bit?
Tom
arnaud chozo wrote:
>
> When I modify the file "functions.R", I'd like R to take into account the
> changes and to reload the file functions.R when I run main.R
>
Source it again, and your old definitions will be overwritten.
Dieter
Easy. See below.
Bert Gunter
Genentech Nonclinical Biostatistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Tom La Bone
Sent: Monday, March 29, 2010 6:56 AM
To: r-help@r-project.org
Subject: Re: [R] Ellipse that Contains 95%
Thanks Gabor. I didn't realize you could. Here is the scriptdemo.rsh file
as a text attachment, in case the line wraps made it hard to read/use.
- Jason
On Mon, Mar 29, 2010 at 11:19 AM, Gabor Grothendieck <
ggrothendi...@gmail.com> wrote:
> Thanks.
>
> You might want to repost it as a text at
Hi,
I'm afraid I really don't have time to enter into a dialogue :-{ but this
fragment from a function of mine might help...
--
outfile <- sub("Rd$", "html", hfile, ignore.case=TRUE) # name of
corresponding html
out.mod <- file.info(outfile)[,"mtime"] # if html
Thanks.
You might want to repost it as a text attachment since many of the
lines wrapped around.
Another more permanent possibility would be to put it on the R wiki at
http://rwiki.sciviews.org/doku.php
Note that the gsubfn package has a facility for quasi-perl type string
interpolation as well.
Hi Jason,
Thanks for sharing your solution(s).
For other alternatives for running R scripts, you (or your colleague)
might want to look into:
* Rscript (comes installed with R (these days))
* littler (http://code.google.com/p/littler/)
Also, there are some libraries that deal with parsing com
There is a discussion of excel dates in R News 4/1.
On Mon, Mar 29, 2010 at 11:47 AM, anna wrote:
>
> Hi Joshua, thank you, this worked pretty well. I don't understand all the
> details of dates in Excel and R, so sorry for not participating more on
> my own post.
>
> -
> Anna Lippel
> --
>
Hi Joshua, thank you, this worked pretty well. I don't understand all the
details of dates in Excel and R, so sorry for not participating more on
my own post.
-
Anna Lippel
On Mar 29, 2010, at 9:29 AM, meghana kulkarni wrote:
Hello all,
This is Meghana.
Well, I have some analysis output in 3 dimensional array form.
for example:
, , type1
A B C D
1 2 3 4
1 2 3 4
, , type2
?write.table
> arr <- array(1:27, c(3,3,3))
> write.table(arr[, , 1])
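If each slice should go to its own file, a small sketch (file names are only
illustrative):
arr <- array(1:27, c(3, 3, 3),
             dimnames = list(NULL, LETTERS[1:3], paste("type", 1:3, sep = "")))
for (k in dimnames(arr)[[3]])
    write.table(arr[, , k], file = paste(k, ".txt", sep = ""), quote = FALSE)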
Dear R users,
A colleague of mine asked me how to write a script (an executable text file
containing R code) in R. After I showed
him, he said that after extensive searching of the R archives, he had not
found anything like these techniques.
He suggested that I share these methods to enable other
Hi all,
I'm trying to plot individual points on a surface plot. I want the
points to be hidden by the surface if they are below it.
AFAIK points() doesn't fulfill this requirement, so on the following
plot the point (50,50,0) is visible and it shouldn't:
x <- 1:100
y <- 1:100
pn <- persp(x, y, o
Hello,
arnaud chozo wrote:
Hi all,
I have a main file main.R in which I include some other R files. For
example, in main.R I have: source("functions.R").
When I modify the file "functions.R", I'd like R to take into account the
changes and to reload the file functions.R when I run main.R
Is it
for a picture of the bagplot, try going to
http://www.statmethods.net/graphs/boxplot.html
Hi all,
I have a main file main.R in which I include some other R files. For
example, in main.R I have: source("functions.R").
When I modify the file "functions.R", I'd like R to take into account the
changes and to reload the file functions.R when I run main.R
Is it possible?
Thanks,
Arnaud
Dear R-helper,
Please suggest some methods for my question below.
We measured the amount of protein A in patient blood in pre-treatment and
post-treatment condition from 32 patients.
Pre-treatment Post-treatment
Pat1 25
Hello all,
I am trying to create a grid of a large number of points (4096*4096); after
processing the data I am writing it to a file.
phi <- 0.5
N <- 4096
mu <- 90
sim<-grf(N*N,grid="reg",cov.model="spherical",cov.pars=c(1,phi),method="RF")
sim$data <- (sim$data - mean(sim$data)) / sd(sim$data)
loc
I'm running a multinomial logit in R using the Zelig package. According to
str(trade962a), my dependent variable is a factor with three levels. When I run
the multinomial logit I get an error message. However, when I run 'model=logit'
it works fine. Any ideas on what's wrong?
## MULTINOMIAL L
Dear Marta,
I did it in Matlab, and fiddled around with R code until I had *almost* the
same result. The "almost" is probably due to R handling the picture values
(ranging from 0 to 1) differently than Matlab (ranging from 0 to 255), and
simply multiplying the R picture values by 255 did NOT re
Hi
r-help-boun...@r-project.org wrote on 29.03.2010 16:13:31:
> Hi,
> Your question is really vague.
> What about legend(lty=, pch=)?
Well, it is probably not mentioned explicitly in ?legend but he is
probably seeking
legend(..., lty=c(1,NA, 2,3), pch=c(NA, 16, NA,NA))
Regards
Petr
> Ivan
Val,
Type "combine two data sets" (text you wrote in your post) into
www.rseek.org. The first two links are: "Quick-R: Merge" and "Merging data:
A tutorial". Isn't it quicker for you to use rseek than to write a post and
wait for a reply? Don't you also get more de