On Dec 2, 2011, at 11:20 PM, Worik R wrote:
Duh! Silly me! But my confusion persists: What is the regression being
done? See below
Please note that your "df" and "M" are undoubtedly different
objects by now:
> M <- matrix(runif(5*20), nrow=20)
> colnames(M) <- c('a', 'b', 'c',
Duh! Silly me! But my confusion persists: What is the regression being
done? See below
On Sat, Dec 3, 2011 at 5:10 PM, R. Michael Weylandt <
michael.weyla...@gmail.com> wrote:
> In your code by supplying a vector M[,"e"] you are regressing "e"
> against all the variables provided in the da
In your code by supplying a vector M[,"e"] you are regressing "e"
against all the variables provided in the data argument, including "e"
itself -- this gives the very strange regression coefficients you
observe. R has no way to know that that's somehow related to the "e"
it sees in the data argumen
>
> Use `lm` the way it is designed to be used, with a data argument:
>
> > l2 <- lm(e~. , data=as.data.frame(M))
> > summary(l2)
>
> Call:
> lm(formula = e ~ ., data = as.data.frame(M))
>
>
And what is the regression being done in this case? How are the
independent variables used?
It looks like
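For illustration, a minimal sketch (re-creating M with a seed; the column names 'd' and 'e' are assumed, since the line above was cut off) showing that with a data argument the formula e ~ . uses only the other columns as predictors:
set.seed(1)
M <- matrix(runif(5 * 20), nrow = 20)
colnames(M) <- c('a', 'b', 'c', 'd', 'e')
l2 <- lm(e ~ ., data = as.data.frame(M))
colnames(model.matrix(l2))   # "(Intercept)" "a" "b" "c" "d" -- e itself is not a predictor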
Hello,
I want to create side-by-side maps of similar attribute data in two
different cities using a single legend.
To simply display side-by-side census block group boundary
(non-thematic) maps for Minneapolis & Cleveland I do the following:
library(rgdal)
library(sp)
Minneapolis=readOGR(".
On 11-12-02 8:38 PM, Michael wrote:
Hi all,
Could you please help me?
I am having the following weird problem when debugging R programs
using "browser()":
In my function, I've inserted a "browser()" in front of Step 1. My
function has 3 steps and at the end of each step, it will print out
the
Hi all,
Could you please help me?
I am having the following weird problem when debugging R programs
using "browser()":
In my function, I've inserted a "browser()" in front of Step 1. My
function has 3 steps and at the end of each step, it will print out
the message "Step i is done"...
However,
On 2011-12-02 13:03, Santosh wrote:
Dear Experts,
When using "plot" and "polygon", I can change the density and angle of the
shaded area lines when plotting is done in regular scale. It does not seem
to work in 'log' scale. Any suggestions would be highly appreciated!
below is an example:
plot
Hi,
For imputation using randomForest package, check
?rfImpute
Weidong
On Fri, Dec 2, 2011 at 6:00 PM, Peter Langfelder
wrote:
> On Fri, Dec 2, 2011 at 2:16 PM, khlam wrote:
>> So I have a very big matrix of about 900 by 400 and there are a couple of NA
>> in the list. I have used the followi
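For illustration, a minimal hedged sketch of rfImpute() (made-up example: iris with a few values deleted; rfImpute() takes the predictors and the response separately):
library(randomForest)
set.seed(17)
ir <- iris
ir[sample(nrow(ir), 10), 1] <- NA            # inject some missing values
ir.imp <- rfImpute(ir[, -5], ir$Species)     # impute the predictor matrix
sum(is.na(ir.imp))                           # 0: the NAs have been filled in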
On 12/02/2011 09:46 PM, Berry, David I. wrote:
> Thanks for the reply and suggestions. I've tried the RpgSQL drivers and
> the results are pretty similar in terms of performance.
>
> The ~1.5M records I'm trying to read into R are being extracted from a
> table with ~300M rows (and ~60 columns) th
On Fri, Dec 2, 2011 at 2:16 PM, khlam wrote:
> So I have a very big matrix of about 900 by 400 and there are a couple of NA
> in the list. I have used the following functions to impute the missing data
>
> data(pc)
> pc.na<-pc
> pc.roughfix <- na.roughfix(pc.na)
> pc.narf <- randomForest(pc.na, na
This problem comes up so frequently that I have made
options(useFancyQuotes=FALSE) by default in my knitr package:
http://yihui.github.com/knitr/
You can also use options(useFancyQuotes='TeX').
Regards,
Yihui
--
Yihui Xie
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa
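For illustration, the option controls the output of sQuote()/dQuote():
options(useFancyQuotes = TRUE)
dQuote("text")               # directional (fancy) quotes, which LaTeX may mangle
options(useFancyQuotes = FALSE)
dQuote("text")               # plain ASCII quotes: "text"
options(useFancyQuotes = "TeX")
dQuote("text")               # TeX-style quotes: ``text''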
Hi,
I have run into a problem writing data using RODBC. The data frame I
have read in from Access includes some NAs. I have put the data into
an xts object, manipulated the data, and would now like to append two
columns of the manipulated data to the original table in Access.
I cannot append the d
So I have a very big matrix of about 900 by 400 and there are a couple of NA
in the list. I have used the following functions to impute the missing data
data(pc)
pc.na<-pc
pc.roughfix <- na.roughfix(pc.na)
pc.narf <- randomForest(pc.na, na.action=na.roughfix)
yet it does not replace the NA in th
Thank you, I copied the data from the R environment, but it came out wrong. You
understood exactly what I wanted, and your solution is admirable: I clearly
need to address the naming convention. Thanks for your help.
--- On Fri, 2/12/11, Jean V Adams wrote:
From: Jean V Adams
Subject: Re: [R
I have the following Sweave file which gets sweaved correctly.
<<>>=
m <- lm(y1 ~x1, anscombe)
summary(m)
@
I include the sweaved .tex file into another .tex file via include.
When I use a single umlaut in the .snw file a warning occurs.
As a result part of the summary output is not contained in
On Dec 2, 2011, at 4:06 PM, Jack Tanner wrote:
David Winsemius comcast.net> writes:
sapply(tz, function(ttt) as.POSIXct(x=x, tz=ttt,
origin="1960-01-01"),simplify=FALSE)
Sure, there's no end of workarounds. It would just be consistent to
treat both
the x and the tz arguments as vectors.
David Winsemius comcast.net> writes:
> sapply(tz, function(ttt) as.POSIXct(x=x, tz=ttt,
> origin="1960-01-01"),simplify=FALSE)
Sure, there's no end of workarounds. It would just be consistent to treat both
the x and the tz arguments as vectors.
Dear Experts,
When using "plot" and "polygon", I can change the density and angle of the
shaded area lines when plotting is done in regular scale. It does not seem
to work in 'log' scale. Any suggestions would be highly appreciated!
below is an example:
plot(1:10,c(1:10)^2*20,log="y")
polygon(c(
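One possible workaround (only a sketch, not necessarily what Santosh needs): plot the log-transformed values on a linear axis, where density/angle shading behaves as usual:
y <- c(1:10)^2 * 20
plot(1:10, log10(y), yaxt = "n", ylab = "y (log scale)")
axis(2, at = log10(c(20, 200, 2000)), labels = c(20, 200, 2000))
polygon(c(2, 8, 8, 2), log10(c(40, 40, 400, 400)),
        density = 10, angle = 45)    # shading lines drawn on the linear axis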
On Dec 2, 2011, at 2:28 PM, Jack Tanner wrote:
x <- 1472562988 + 1:10; tz <- rep("EST",10)
# Case 1: Works as documented
ct <- as.POSIXct(x, tz=tz[1], origin="1960-01-01")
# Case 2: Fails
ct <- as.POSIXct(x, tz=tz, origin="1960-01-01")
sapply(tz, function(ttt) as.POSIXct(x=x, tz=ttt,
orig
Here's a slight modification that is even faster if speed is a consideration:
sapply(Version1_, `[[`, "First")
The thought process is to go through the list "Version1_" and apply
the operation `[[` to each element individually. This requires a
second argument (here the element name "First") which
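A self-contained illustration (hypothetical list; each element is assumed to have a component named "First"):
Version1_ <- list(list(First = 1, Second = "a"),
                  list(First = 2, Second = "b"),
                  list(First = 3, Second = "c"))
sapply(Version1_, `[[`, "First")    # extract the "First" component from every element
# [1] 1 2 3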
With similar data, since you didn't include reproducible example of your own:
> results <- matrix(c(53, 55, 37, 83), nrow=1)
> colnames(results) <- letters[1:4]
> results
a b c d
[1,] 53 55 37 83
> order(results)
[1] 3 1 2 4
> colnames(results)[order(results)]
[1] "c" "a" "b" "d"
On Fri
names(results)[order(results)]
Michael
On Fri, Dec 2, 2011 at 2:45 PM, Martin Bauer wrote:
> Hello,
>
>
> I have a matrix results with dimension 1x9 double matrix
>
> XLB XLE XLF XLI
> 1 53.3089 55.77923 37.64458 83.08646
>
> I'm trying to order
Here is one way of doing it:
compMat2 <- function(A, B) {   # rows of B present in A
  B0 <- B[!duplicated(B), ]
  na <- nrow(A); nb <- nrow(B0)
  AB <- rbind(A, B0)
  ab <- duplicated(AB)[(na+1):(na+nb)]
  return(sum(ab))
}
set.seed(8237)
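A hypothetical usage sketch (made-up matrices, continuing from the seed above):
A <- matrix(sample(1:5, 20, replace = TRUE), ncol = 2)
B <- rbind(A[1:3, ], matrix(sample(1:5, 10, replace = TRUE), ncol = 2))
compMat2(A, B)    # number of distinct rows of B that also occur in A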
Hello,
I have a matrix results with dimension 1x9 double matrix
XLB XLE XLF XLI
1 53.3089 55.77923 37.64458 83.08646
I'm trying to order this matrix
> print(order(results))
[1] 3 1 2 4
how can the function order return the columnname XLF
Great, this worked the fastest of all the suggestions. Cheers,
Josh
From: Michael Weylandt [via R] [mailto:ml-node+s789695n414494...@n4.nabble.com]
Sent: Thursday, December 01, 2011 8:11 PM
To: ROLL Josh F
Subject: Re: Summarizing elements of a list
Similarly, t
thanks Michael.
I played with your suggestion to get the output in the format I wanted, and
I found the following that works fine:
sub<-d[, which(colnames(d) %in% v) ]
Aurelien
2011/12/2 R. Michael Weylandt <
michael.weyla...@gmail.com>
> How about this?
>
> d[, v[v %in% colnames(d)]]
>
> Mich
Thank you for the help, I knew it could be done with a member of the apply
family. I struggle with apply stuff though; it's not always intuitive for me
with these functions.
Cheers,
JR
From: Sarah Goslee [via R] [mailto:ml-node+s789695n414453...@n4.nabble.com]
S
Michael Kao gmail.com> writes:
>
Well, taking a second look, I'd say it depends on the exact formulation.
In the applications I have in mind, I would like to count each occurrence
in B only once. Perhaps the OP never thought about duplicates in B
Hans Werner
>
> Here is an example based on t
x <- 1472562988 + 1:10; tz <- rep("EST",10)
# Case 1: Works as documented
ct <- as.POSIXct(x, tz=tz[1], origin="1960-01-01")
# Case 2: Fails
ct <- as.POSIXct(x, tz=tz, origin="1960-01-01")
If case 2 worked, it'd be a little easier to process paired (time, time zone)
vectors from different time z
Michael Kao gmail.com> writes:
>
Your solution is fast, but not completely correct, because you are also
counting possible duplicates within the second matrix. The 'refitted'
function could look as follows:
compMat2 <- function(A, B) { # rows of B present in A
B0 <- B[!duplicated(
Hi all,
I was wondering if any one had scripts that they could share for
capturing the current version of R packages used for a project. I'm
interested in creating a project local library so that you're safe if
someone (e.g. the ggplot2 author) updates a package you're relying on
and breaks your c
The error message is pretty explicit: your problem is that one of your
inputs has NA (missing value) in it and the GenMatch() function is not
prepared to handle them. You can find which one by running:
any(is.na(Tr))
any(is.na(X.binarynp))
any(is.na(BalanceMatrix.binarynp))
and then use View() on
I am trying to compile R using Solaris Studio, but it keeps trying to use the
GNU compiler! I've tried editing all the Makeconf files I can find, but
configure keeps changing them back! I tried to rename the GNU directory so
it could not find gcc, but then I got a missing lib error.
How does one
Hi,
The following code should work:
fields <- dbListFields(con, db.table.name)
reordered.names <- names(df)[match(fields, names(df))]
df <- df[ ,reordered.names]
But, you might want to try using the function 'dbWriteTable2' in the
'caroline' package. (In fact the three lines above have been co
The references are here: http://manning.com/kabacoff/excerpt_references.pdf
(they will be included on the next printing too, got omitted by mistake)
Regards,
Pablo
You never create a variable called "Mat2002273" or "Mat2002361" so you
can't ask R to loop over all the values between them.
If I were you, I'd code something like this:
lf <- list.files()
# PUT IN SOME CODE TO REMOVE FILES YOU DON'T WANT TO USE
pv <- vector("numeric", length(lf))
for(i in lf)
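A hedged completion of that sketch (the column name Pixelvalues comes from the original question; the folder layout is assumed):
setwd("Matrices")                          # folder holding Mat2002273.csv ... Mat2002361.csv
lf <- list.files(pattern = "^Mat.*\\.csv$")
pv <- numeric(length(lf))
names(pv) <- lf
for (i in seq_along(lf)) {
  dat <- read.csv(lf[i])
  pv[i] <- mean(dat$Pixelvalues, na.rm = TRUE)
}
pv                                         # one mean per file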
On 2/12/2011 2:48 p.m., David Winsemius wrote:
On Dec 2, 2011, at 4:20 AM, oluwole oyebamiji wrote:
Hi all,
I have matrix A of 67420 by 2 and another matrix B of 59199 by 2.
I would like to find the number of rows of matrix B that I can find
in matrix A (rows that are common to both matr
Hi,
I am struggling to build a loop.
I have in a folder called Matrices several files (.csv) called Mat2002273,
Mat2002274 to Mat2002361.
I want to calculate for each file the mean of the column called Pixelvalues.
I tried this code but as a result I get this message: Mat2002273 not found
>
The problem: There are no a priori groupings to run a classification on
My solution:
This is a non-R code question, so I appreciate any thoughts. I have
used pam in the cluster package followed by silhouette to find the
optimum number of clusters on scaled and centered data. I have follo
How about this?
d[, v[v %in% colnames(d)]]
Michael
On Dec 2, 2011, at 12:01 PM, Aurélien PHILIPPOT
wrote:
> Hi Paul and Jim,
> Thanks for your messages.
>
> I just wanted R to give me the columns of my data frame d, whose names
> appear in v. I do not care about the names of v that are not i
Hi Bert,
Since you opened the door ...
On Fri, Dec 2, 2011 at 10:06 AM, Bert Gunter wrote:
> ?ordered
> ?C
> ?contr.poly
>
> If you don't know what polynomial contrasts are, consult any good
> linear models text. MASS has a good, though a bit terse, section on
> this.
Do you have a "favorite" l
Thanks for this!
Axel.
On Thu, Dec 1, 2011 at 11:29 AM, Liaw, Andy wrote:
> The first version of the package was created by re-writing the main
> program in the original Fortran as C, and calls other Fortran subroutines
> that were mostly untouched, so dynamic memory allocation can be done.
>
On 02.12.2011 17:41, robgriffin247 wrote:
Hi,
I am trying to put larger axis labels on my graphs (using cex.axis and
cex.label) but when I do this the top of the text on the Y axis goes outside
of the window which you can see in this picture
-http://twitter.com/#!/robgriffin247/status/14264288
I guess the numbers you report are what your OS shows you?
R runs garbage collection (which can be manually triggered by gc()) after
certain fuzzy rules. So what you report below is not always the current
required memory but what was allocated and not yet garbage collected.
See ?object.size
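A small illustration of the difference between an object's size and the process footprint:
x <- rnorm(1e6)
object.size(x)   # about 8 MB for this one object
rm(x)
gc()             # forces garbage collection; the "used" figures drop, but the
                 # memory the OS reports for the R process may not shrink immediately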
Hi Paul and Jim,
Thanks for your messages.
I just wanted R to give me the columns of my data frame d, whose names
appear in v. I do not care about the names of v that are not in d. In
addition, every time, there will be at least one element of v that has a
corresponding column in d, for sure, so I
Hi,
I am trying to put larger axis labels on my graphs (using cex.axis and
cex.label) but when I do this the top of the text on the Y axis goes outside
of the window which you can see in this picture
-http://twitter.com/#!/robgriffin247/status/142642881436450816/photo/1 - (if
you click on the pict
Dear R community,
I am still struggling a bit on how R does memory allocation and how to optimize my code to minimize
working memory load. Simon (thanks!) and others gave me a hint to use the command "gc()"
to clean up memory, which works quite nicely but appears to me to be more like a "fix" to a
Hello everybody,
I am new to this mailing list and hope to find some help.
I'm trying to get into the spatstat package and encountered two problems. First
a graphical one:
There is an example dataset called "finpines" which has several marks
(http://www.oga-lab.net/RGM2/func.php?rd_id=spatstat:f
Hi Jan,
You likely want to simply use
?predict
(e.g: predict.rpart)
Are you using a classification or a regression tree?
Contact
Details:---
Contact me: tal.gal...@gmail.com | 972-52-7275845
Read me: www.talgalili.com (Hebrew) |
Thank you both Bert and David, for the quick reply.
I will look further into this.
With regards,
Tal
Contact
Details:---
Contact me: tal.gal...@gmail.com | 972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.c
Dear all,
I want to keep in my data file the results of terminal nodes (groups) after
CART analysis, for performing other statistical analyses on these groups.
Can you help me please?
thanks.
jan.
There are a million ways to do this, probably.
brks <- c(1,sort(sample(seq_len(99),3)),100) ## 4 random groups
and then use brks as the breaks parameter in cut() with include.lowest = TRUE
?cut
-- Bert
On Fri, Dec 2, 2011 at 7:09 AM, statfan wrote:
> say n = 100
> I want to partition this i
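Spelled out as a runnable sketch of that suggestion (the endpoints are excluded from the sample so the breaks stay distinct):
n <- 100
brks <- c(1, sort(sample(2:(n - 1), 3)), n)   # 4 random group boundaries
grp <- cut(seq_len(n), breaks = brks, include.lowest = TRUE, labels = 1:4)
table(grp)                                    # four group sizes summing to n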
On Dec 2, 2011, at 10:09 AM, statfan wrote:
say n = 100
> I want to partition this into 4 random groups where n1 + n2 + n3 +
n4 = n
and ni is the number of elements in group i.
Try assigning with a sample() from:
unlist(mapply(rep, c(1:4), each=c(n1,n2,n3,n4)))
--
David Winsemius, MD
We
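A hedged sketch of that idea (hypothetical group sizes n1..n4):
n  <- 100
ns <- c(20, 30, 25, 25)                              # n1..n4, summing to n
grp <- sample(unlist(mapply(rep, 1:4, each = ns)))   # random group label for each element
table(grp)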
A simple way to determine if it is NOT is to see if the mean (the single
parameter of a Poisson: lambda) and variance are the same.
This really has nothing to do with R (other than the data source), and since
it is homework, you will likely get no further help here.
Good luck.
RToss wrote
>
>
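For illustration, an informal check of that kind (simulated data; equal mean and variance is necessary but not sufficient for a Poisson, so a formal goodness-of-fit test would still be needed):
set.seed(1)
x <- rpois(500, lambda = 3)   # simulated counts, for illustration only
mean(x); var(x)               # roughly equal for Poisson data
var(x) / mean(x)              # index of dispersion, roughly 1 for a Poisson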
On Dec 2, 2011, at 9:51 AM, Tal Galili wrote:
Hello dear all,
I am unable to understand why when I run the following three lines:
set.seed(4254)
a <- data.frame(y = rnorm(40), x=ordered(sample(1:5, 40, T)))
summary(lm(y ~ x, a))
The output I get includes factor levels which are not releva
say n = 100
I want to partition this into 4 random groups where n1 + n2 + n3 + n4 = n
and ni is the number of elements in group i.
Thank you for your help
Hi!
I am sitting with a school assignment, but I got stuck on this one.
I am supposed to test if my data is Poisson-distributed.
The data I'm using is the study "Bids", found in the Ecdat package, and the
variable of interest is the dependent "numbids".
How do I practically perform a test for this
Dear R Users,
I am a novice learner of R software. I am working with Genetic Matching
- GenMatch(), but I am getting an Error message as follows:
Error in GenMatch(Tr = Tr, X = X.binarynp, BalanceMatrix =
BalanceMatrix.binarynp, :
GenMatch(): input includes NAs
Could you please suggest me
On 01/12/2011 17:01, "Gabor Grothendieck" wrote:
>On Thu, Dec 1, 2011 at 10:02 AM, Berry, David I. wrote:
>> Hi List
>>
>> Apologies if this isn't the correct place for this query (I've tried a
>>search of the mail archives but not had much joy).
>>
>> I'm running R (2.14.0) on a Mac (OSX v 10.
Dear R-users
I want to save a character list, a point plot and a Venn diagram on
a single pdf page.
I am able to do this when I use a character list and two point plots.
However, when I try to replace the first point plot with my Venn diagram
(built with the Vennerable package, compute
Maybe should have explicitly said:
> C(ordered(1:5))
[1] 1 2 3 4 5
attr(,"contrasts")
ordered
contr.poly
Levels: 1 < 2 < 3 < 4 < 5
-- Bert
On Fri, Dec 2, 2011 at 7:06 AM, Bert Gunter wrote:
> ?ordered
> ?C
> ?contr.poly
>
> If you don't know what polynomial contrasts are, consult any good
>
?ordered
?C
?contr.poly
If you don't know what polynomial contrasts are, consult any good
linear models text. MASS has a good, though a bit terse, section on
this.
-- Bert
On Fri, Dec 2, 2011 at 6:51 AM, Tal Galili wrote:
> Hello dear all,
>
> I am unable to understand why when I run the follow
Dear R community,
I am trying to understand how Ward linkage works from a quantitative point
of view.
To test it I have devised a simple 3-member set:
G = c(0,2,10)
The distances between all couples are:
d(0,2) = 2
d(0,10) = 10
d(2,10) = 8
The smallest distan
Hello dear all,
I am unable to understand why when I run the following three lines:
set.seed(4254)
> a <- data.frame(y = rnorm(40), x=ordered(sample(1:5, 40, T)))
> summary(lm(y ~ x, a))
The output I get includes factor levels which are not relevant to what I am
actually using:
Call:
> lm(form
It's easier for folks to help you if you put your example data in a format
that can be readily read in R. See, for example, the dput() function,
which you can use to provide us with something like this:
DF <- structure(list(NAME = c("Control_1", "Control_2", "Control_1",
"Control_3", "MM0289~R
You've been given a workable solution already, but here's a one-liner:
> x <- c('sta_+1+0_field2ndtry_$01.cfg' ,
> 'sta_+B+0_field2ndtry_$01.cfg' , 'sta_+1+0_field2ndtry_$01.cfg' ,
> 'sta_+9+0_field2ndtry_$01.cfg')
> sapply(1:length(x), function(i)gsub("\\+(.*)\\+.", paste("\\+\\
On Dec 2, 2011, at 3:55 AM, lincoln wrote:
Thanks.
Anyway, it is not homework and I was not told to do that. My
question has
not been answered yet, I'll try to reformulate it:
Does it make (statistical) sense to resample with replacement in this
situation to get an estimate of the CIs? In ca
On 12/01/2011 08:00 PM, Ben quant wrote:
The data I am using is the last file called l_yx.RData at this link (the
second file contains the plots from earlier):
http://scientia.crescat.net/static/ben/
The logistic regression model you are fitting assumes a linear
relationship between x and the
On Dec 2, 2011, at 4:20 AM, oluwole oyebamiji wrote:
Hi all,
I have matrix A of 67420 by 2 and another matrix B of 59199 by
2. I would like to find the number of rows of matrix B that I can
find in matrix A (rows that are common to both matrices with or
without sorting).
I have trie
It means you also need to install SparseM on which quantreg depends. This can
be done in exactly the same way, either by direct download using
install.packages() or local install.
Michael
On Dec 2, 2011, at 6:30 AM, narendarreddy kalam
wrote:
> Hi all,
> my os is windows 7 and R version i
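For example, with an internet connection the missing dependency can be pulled in directly:
install.packages("SparseM")   # dependency of quantreg
library(quantreg)             # should now load without the missing-package error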
You are too good :)
Thanks a lot, have a nice weekend.
B.R
Alex
From: jim holtman
Cc: "R-help@r-project.org"
Sent: Friday, December 2, 2011 1:51 PM
Subject: Re: [R] find and replace string
try this:
> x <- c('sta_+1+0_field2ndtry_$01.cfg'
+ , 'sta_
Dear R Users,
I am a novice learner of R software. I am working with Genetic Matching
- GenMatch(), but I am getting an Error message as follows:
Error in GenMatch(Tr = Tr, X = X.binarynp, BalanceMatrix =
BalanceMatrix.binarynp, :
GenMatch(): input includes NAs
Could you please suggest me
If the length of the first part is constant (the "sta_+1+" part) then
you can use substr()
On 2 December 2011 13:30, Alaios wrote:
>
Dear all,
> I would like to search in a string for the second occurrence of a symbol and
> replace the symbol after it
>
> For example my strings look like
>
>
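A hedged sketch along those lines, assuming the digit to change always sits at character position 8 (as in "sta_+1+0_..."):
x <- "sta_+1+0_field2ndtry_$01.cfg"
out <- character(10)
for (i in 0:9) {
  y <- x
  substr(y, 8, 8) <- as.character(i)   # overwrite the digit after the second '+'
  out[i + 1] <- y
}
out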
Hi all,
my OS is Windows 7 and my R version is 2.14, and I used the quantreg zip
file (binary version for Windows) to install.
Hi all,
I have installed the quantreg package using the "install packages from local
zip files" option.
Then I got the following message:
utils:::menuInstallLocal()
package ‘quantreg’ successfully unpacked and MD5 sums checked
Does that mean the quantreg package got installed on my machine?
If so, why am I
Hi all,
I am trying to install the Rmpi package in R and, while the installation
itself works, it breaks down when trying to load the library. I think it
has something to do with shared vs static loading of helper libraries,
or the order in which shared libraries are loaded, but I am not sure.
R
Hi all,
I have matrix A of 67420 by 2 and another matrix B of 59199 by 2. I would
like to find the number of rows of matrix B that I can find in matrix A (rows
that are common to both matrices with or without sorting).
I have tried the "intersection" and "is.element" functions in R but it on
try this:
> x <- c('sta_+1+0_field2ndtry_$01.cfg'
+ , 'sta_+1+0_field2ndtry_$01.cfg'
+ , 'sta_+1-0_field2ndtry_$01.cfg'
+ , 'sta_+1+0_field2ndtry_$01.cfg'
+ )
> # find matching fields
> values <- grep("[^+]*\\+[^+]*\\+0", x, value = TRUE)
> # split into two piec
Dear all,
I would like to search in a string for the second occurrence of a symbol and
replace the symbol after it
For example my strings look like
sta_+1+0_field2ndtry_$01.cfg
I want to find the digit that comes after the second +, which in this case is zero,
and then over a loop create the strin
Vytautas Rakeviius writes:
> But still I have question about results interpretation. In the end I
> want to construct prediction function in form:
> Y=a1x1+a2x2
The predict() function does the prediction for you. If you want to
construct the prediction _equation_, you can extract the coefficien
Depends on how you want to 'check'. I usually use 'View' to see if the data
looks OK. You could write some more code to check the 'reasonableness' of the
data. It sounds like you have to learn some ways of 'debugging' your code.
Checking your data depends on what the criteria are for determinin
Hi!
I would just like to have a way to check if my functions are working ok.
If the subset I am extracting is ok (both coordinates and dataset).
The files are in netCDF format, which I import into R (I only import a
small geographic subset).
Is there other software that will allow me to do this just
On 12/02/2011 07:20 AM, Aurélien PHILIPPOT wrote:
> Dear R-users,
> -I am new to R, and I am struggling with the following problem.
>
> -I am repeating the following operations hundreds of times, within a loop:
> I want to subset a data frame by columns. I am interested in the columns
> names that
?try
If you know that you might have a problem with undefined columns, or whatever,
then trap the error with 'try' so your program can recover. You could also
validate the data that you are going to use before entering the loop; standard
defensive programming - errors are always going to happe
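A minimal sketch of that pattern (hypothetical loop body):
files <- list.files(pattern = "\\.csv$")   # hypothetical inputs
results <- vector("list", length(files))
for (i in seq_along(files)) {
  res <- try(read.csv(files[i]), silent = TRUE)
  if (inherits(res, "try-error")) {
    message("skipping ", files[i])         # recover and carry on with the loop
    next
  }
  results[[i]] <- res
}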
What do you want to do with it after you export it? That will probably define what
the data format should look like. Why would you want each dimension separately?
How would you correlate them later? Is it really 3 dimensions, or is your
data just three columns where each row is long, lat and obse
What is the best way to export an array?
The array I am trying to export has 3 dimensions (long, lat, observations).
How can I export each dimension independently?
e.g. one csv file with only the long
Thanks.
Anyway, it is not homework and I was not told to do that. My question has
not been answered yet, I'll try to reformulate it:
Does it make (statistical) sense to resample with replacement in this
situation to get an estimate of the CIs? In case it does, how could I do it
in R?
Some further
William and David, thanks for your help.
The contrasts option was indeed what I was looking for but didn't find.
andi
On 01.12.2011 20:56, David Winsemius wrote:
On Dec 1, 2011, at 1:00 PM, William Dunlap wrote:
Terry will correct me if I'm wrong, but I don't think the
answer to this questio
Dear R-users,
-I am new to R, and I am struggling with the following problem.
-I am repeating the following operations hundreds of times, within a loop:
I want to subset a data frame by columns. I am interested in the columns
names that are given by the rows of another data frame that was built i
Hello,
you can fetch the column names of a table with dbListFields and then
reorder or rename the data frame according to those.
If you want more specific help, provide an example (RSQLite would be a
good choice as database engine to make it easily reproducible for others).
Best regards,
A
Hi Sachin,
In this mail there is not enough context to provide you with advice.
Please read the posting guide:
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
regards,
Paul
On 12/02/2011 05:09 AM, S
René,
Yes, to fit a re-parameterized logistic model I think you'd have to code the
whole enchilada yourself, not relying on glm (but not nls() as nls() deals with
least squares minimization whereas here we want to minimize a minus log
binomial likelihood).
I did that and have the re-parameteri