When I try to install Rmpi, it has a dependency rsprng which, as its
description says: Provides interface to SPRNG 2.0 APIs
I installed it on Mepis by installing the appropriate debs with
minimal drama. In the rpm world of Fedora, no such luxury.
Installing rsprng fails because:
checking sprn
I can't believe I'm saying this, but I think Peter is being a bit harsh on SAS.
I prefer Greg Snow's analogy (in the fortune collection): If SPSS (or SAS) and
R were vehicles, SPSS would be the bus, going on fixed routes and efficiently
carrying lots of people to standard places, whereas R is
Dear all,
I have a data.frame with a column like the x shown below
myDF<-data.frame(cbind(x=c("[[1, 0, 0], [0, 1]]",
"[[1, 1, 0], [0, 1]]","[[1, 0, 0], [1, 1]]",
"[[0, 0, 1], [0, 1]]")))
> myDF
x
1 [[1, 0, 0], [0, 1]]
2 [[1, 1, 0], [0, 1]]
3 [[1, 0, 0], [1, 1]]
4 [[0, 0,
On Thu, Feb 18, 2010 at 8:29 AM, milton ruser wrote:
> Dear all,
>
> I have a data.frame with a column like the x shown below
> myDF<-data.frame(cbind(x=c("[[1, 0, 0], [0, 1]]",
> "[[1, 1, 0], [0, 1]]","[[1, 0, 0], [1, 1]]",
> "[[0, 0, 1], [0, 1]]")))
>> myDF
> x
> 1 [[1, 0,
On Thu, Feb 18, 2010 at 8:29 AM, milton ruser wrote:
> Dear all,
>
> I have a data.frame with a column like the x shown below
> myDF<-data.frame(cbind(x=c("[[1, 0, 0], [0, 1]]",
> "[[1, 1, 0], [0, 1]]","[[1, 0, 0], [1, 1]]",
> "[[0, 0, 1], [0, 1]]")))
>> myDF
> After identify the groups I wou
This is really an R-devel topic. But,
- Rmpi only suggests rsprng, and you are unlikely to need it.
- sprng2.0b.tar.gz builds from the sources without problem on Fedora
12, and then rsprng installs. See
http://sprng.fsu.edu/Version2.0/quick-start.html .
On Thu, 18 Feb 2010, Patrick Connolly
Hi!
I have just started learning R, and I joined this group only today. This is
my first mail, and I wish to thank all of you for allowing me to be part of this
group.
I have the following problem. I have an input.csv file such that
corp_id date investment_id rate
corp1 17-F
On 2/18/10, Frank E Harrell Jr wrote:
> How amazing that SAS is still used to produce reports that reviewers hate
> and that requires tedious low-level programming. R + LaTeX has it all over
>
To simplify things, R + LyX could also be a solution.
Liviu
Hello
I am using the following commands but am not able to place the text values on
the graph. Can someone please make suggestions for improvement?
#here is the command
loc_mds <- cmdscale(dist.r, k = 7, eig = TRUE)
loc_mds$eig
sum(abs(loc_mds$eig[1:2]))/sum(abs(loc_mds$eig))
sum((loc_mds$eig[1:2])^2)
Hi Candan,
As a more general remark, there is a mailing list for spatial data,
including interpolation, r-sig-geo. This question would be more
appropriate there. I gave some answers in-line below from what I could
come up with. Reposting on r-sig-geo would be a good idea to get more
response
Hi All,
I use the arules library, and try to create association rules for this
transaction file:
a,c,f,3,4,5
b,e,1,2,4
a,c,e,f,1,3,4,5
d,5
b,c,e,f,1,2,3,4
a,c,e,f,1,3,4,5
b,c,e,f,1,3,4
b,e,1,2,4
a,c,e,f,1,3,4,5
a,b,c,e,f,1,3,4
a,c,d,f,3,4,5
I want to get the rule such:
{c,e,f}=> {3,4,5}
I
On 02/18/2010 05:31 PM, Peter Ehlers wrote:
I agree with Bill's advice, but if you want the easy way out,
try dotplot.mtb in package plotrix. Jim Lemon's done the job
for us.
In fact, Barry Rowlingson and Rolf Turner did the job, I just bask in
the glory of being the package maintainer.
Jim
I had a similar problem. In my case, I had a large table of data and wanted
to find and exclude a single huge value in one column (i.e. remove the
entire row). There were thousands of rows of data, and this single value
was more than 3x the next value, and at least 30x the typical value. I
want
Here is a solution using strapply in the gsubfn package.
First we define a proto object p containing a single method, i.e.
function, called fun. fun will take one [...] construct and split it
into the numeric vector v using strsplit and will also assign it
names. strapply has a built in variable
Dear 'R' friends
I have a sort of stupid question to ask.
I have a matrix say of the order 4 X 3 as
83 98 90
21 83 84
70 39 56
65 29 38
Is there any command in R which will reverse the row order, i.e. I need the
same 4 X 3 matrix but as given below
65 29 38
70
x[nrow(x):1,]
b
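A quick sketch of the one-liner above, rebuilt with the matrix from the question (the object name m is just illustrative):

```r
# The 4 x 3 matrix from the question (matrix() fills column by column)
m <- matrix(c(83, 21, 70, 65,
              98, 83, 39, 29,
              90, 84, 56, 38), nrow = 4)
m[nrow(m):1, ]   # rows in reverse order; the first row is now 65 29 38
```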
On Thu, Feb 18, 2010 at 11:45 AM, Amelia Livington
wrote:
> Dear 'R' friends
>
> I have a sort of stupid question to ask.
>
> I have a matrix say of the order 4 X 3 as
>
> 83 98 90
> 21 83 84
> 70 39 56
> 65 29 38
>
> Is there any command in R which will r
So, what did you expect instead as a result of an ordered logistic
regression?
See http://www.ats.ucla.edu/stat/R/dae/ologit.htm for interpretational help.
hth.
Mathew, Abraham T schrieb:
I ran the following code for an ordered logit, but don't know why two levels of my
dependent variable are at t
It is easy to devolve into visceral response mode, lose objectivity, and slip
into intolerance. R, S, S-Plus, SAS, PASW (nee SPSS), and Stata are all tools.
Each has strengths and weaknesses. No one is inherently better or worse than
the others. The quality of the results produced by any one of them
On Thu, Feb 18, 2010 at 9:21 AM, Anna Carter wrote:
> My objective is to rearrange filtered1 as
>
> date corp1 corp2 corp11 corp17
> 17-Feb 65 95 30 16
> 16-Feb 70 135
> 15-Feb 69 140
> 14-Feb 89
> 13-Feb 88
>
> #(Th
On Thu, Feb 18, 2010 at 12:14 PM, Barry Rowlingson
wrote:
> Ooh, so close! If you'd said 'reshaped' you'd be half way there:
.. because that is all in the reshape package. So do
> library(reshape)
first.
Oops
--
blog: http://geospaced.blogspot.com/
web: http://www.maths.lancs.ac.uk/~row
Dear R addicts,
Back in 2006, I posted the proposition for a tweak of the Sweave driver so that
PNG figures can be produced instead of eps/pdf :
https://stat.ethz.ch/pipermail/r-help/2006-March/102122.html
The amount of code modified is tiny (it was designed to induce minimal changes
to the of
Try this also:
xtabs(rate ~ date + corp_id + investment_id, data = DF)
On Thu, Feb 18, 2010 at 7:21 AM, Anna Carter wrote:
> Hi!
> I have just started learning R and today only I have joined this group. This
> is my first mail and I wish to thank all of you for allowing me to be part of
> thi
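A small, made-up example of the xtabs() call above (the data values are invented; only the column names come from the original post):

```r
# Toy data with the columns from the post: corp_id, date, investment_id, rate
DF <- data.frame(corp_id       = c("corp1", "corp2", "corp1"),
                 date          = c("17-Feb", "17-Feb", "16-Feb"),
                 investment_id = c("inv1", "inv1", "inv1"),
                 rate          = c(65, 95, 70))
# Cross-tabulate rate by date, corp_id and investment_id
xtabs(rate ~ date + corp_id + investment_id, data = DF)
```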
On Thu, Feb 18, 2010 at 12:12 PM, John Sorkin
wrote:
> It is easy to devolve into visceral response mode, lose objectivity and slip
> into intolerance. R, S, S-Plus, SAS, PASW (nee SPSS), STATA, are all tools.
> Each has strengths and weaknesses. No one is inherently better, or worse than
> the
Dear R community,
Is there an option to assign minimum and maximum values for z axis in 3D graph
using the function curve3d from the package emdbook? I know there are such
options for x and y axes.
Thanks.
Julia
Dear Carsten Dormann,
I would like to know if the quantitative linkage density measure given
by bipartite is weighted or not?
I couldn't find this information in the package manual or in previous
posts on R-help. In Tylyanakis et al. 2007 (the reference provided in the
bipartite package manual), both m
You're minimizing the log likelihood, but you want to minimize the *negative*
log likelihood. Also, optimize() is better than optim() if you have a
function of only one argument.
Replace
Jon Moroney wrote:
>
> #Create the log likelihood function
> LL<-function(x) {(trials*log(x))-(x*sumvect)}
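A sketch of the suggested fix; trials and sumvect are stand-ins for the poster's data, so the numbers below are invented:

```r
trials  <- 10   # made-up count of observations
sumvect <- 25   # made-up sum of the observed values

# Negative log-likelihood: note the leading minus sign
negLL <- function(x) -((trials * log(x)) - (x * sumvect))

# optimize() minimizes by default and suits one-argument functions
optimize(negLL, interval = c(1e-6, 10))$minimum  # close to trials/sumvect = 0.4
```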
Göran, David,
in order to adapt aftreg to my needs I wrote a little function that
I would like to share with you and the community.
WHAT DOES IT FIX?
(1) Using the id-argument in combination with missing values on
covariates wasn't easily possible before because the id-dataframe
and the da
I really like both of your responses. To add to Peter's thoughts, I've
found that more than half of SAS programmers can learn modern
programming languages given a push. And if pharmaceutical companies
ever knew the true cost of SAS in terms of their having to hire more
programmers to deal wit
Dear R users,
on March 15 & 16 there will be an R course on 'Parallel Computing
with R' in Munich at the supercomputer center. The course is part of
the Munich R Course series and gives an introduction to parallel
computing with R, especially the R packages snow, snowfall, multicore,
nws
On 2010-02-18 1:04, Jim Lemon wrote:
On 02/18/2010 05:31 PM, Peter Ehlers wrote:
I agree with Bill's advice, but if you want the easy way out,
try dotplot.mtb in package plotrix. Jim Lemon's done the job
for us.
In fact, Barry Rowlingson and Rolf Turner did the job, I just bask in
the glory of
So what I'm looking for is readily available tools/packages that could
produce some of the following:
3.6 Summary of Useful Commands (STATA: Source:
http://www.ats.ucla.edu/stat/Stata/webbooks/logistic/chapter3/statalog3.htm)
* linktest--performs a link test for model specification, in our
ca
Hi everybody,
Does anyone know what the problem may be with this test?
I am applying 5 different normality tests and use their p-values, but for
some reason S-W gives me NA, while the sample size is 100.
Any ideas?
Thanks a lot!
Hi All
I am a newbie to using sockets in R.
I wrote the function below in R for Windows.
The strange behavior is that each time I call this function, R seems to hang,
for reasons I can't determine.
If I give the commands below line by line at the prompt, everything is fine:
connection happens immediately,
Hi
I would like to write a script that reads a list of variable names. These
variable names are some of the column headers in a data.frame. Then I want do a
for-loop to execute various operations on the specified variables in the
data.frame, but can't figure out how to do the necessary variable
bill.venab...@csiro.au wrote:
> I can't believe I'm saying this, but I think Peter is being a bit harsh on
> SAS.
>
> I prefer Greg Snow's analogy (in the fortune collection): If SPSS (or
>
SAS) and R were vehicles, SPSS would be the bus, going on fixed routes
and efficiently carrying lots of
Hello,
Jon Erik Ween wrote:
Hi
I would like to write a script that reads a list of variable names. These variable names
are some of the column headers in a data.frame. Then I want do a for-loop to execute
various operations on the specified variables in the data.frame, but can't figure out ho
Nooshin,
arules currently only implements mining algorithms that produce rules
with one item in the right-hand side (RHS) (apriori, ruleInduction). If
you need rules with more than one item, then you can mine frequent
itemsets (with eclat or apriori). Then you take each itemset, e.g., {1,
2,
Hi
I am new to using R and don't understand the output of the npmc
package. What is the difference between the two R values, and which one
should I consider? Any advice on this would be helpful as would direction to
relevant links/literature.
Thank you
Gráinne
Hey hey,
I'm analyzing a data set containing the element concentrations of various
samples...
I wanted to construct notched boxplots and got quite ugly results for some
of them. The notches are often larger than the hinges, which resulted
in weird-looking edges (even though I'm using a log-
Hi,
I've this dataframe:
V1 V5 V6
1 MOD13Q1_249 0.1723 A1
2 MOD13Q1_249 0.1824 B1
3 MOD13Q1_249 0.1824 C1
4 MOD13Q1_249 0.1774 A2
5 MOD13Q1_249 0.1953 B2
6 MOD13Q1_249 0.1824 C2
7 MOD13Q1_265 0.1921 A1
8 MOD13Q1_265 0.1938 B1
9
Trafim,
Could you please provide an example of the data?
I don't believe the sample size to be the driver of the issue you are
having.
For example,
set.seed(20)
x <- rnorm(100)
shapiro.test(x)
returns no error and a p value of 0.5519.
Sample (or actual) data would be helpful in diagnosing
optim() really isn't intended for 1-D functions. And if you have a constrained search area,
it pays to use it. The result you are getting is like the second root of a quadratic that
you are not interested in.
You may want to be rather careful about the problem to make sure you have the function
If the data is fairly small, send it and the objective function to me off-list and I'll
give it a quick try.
However, this looks very much like the kind of distance-constrained type of problem like
the "largest small polygon" i.e., what is the maximum area hexagon where no vertex is more
than
Try this:
xtabs(V5 ~ V1 + V6, data = DF)
On Thu, Feb 18, 2010 at 1:50 PM, Alfredo Alessandrini
wrote:
> Hi,
>
> I've this dataframe:
>
> V1 V5 V6
> 1 MOD13Q1_249 0.1723 A1
> 2 MOD13Q1_249 0.1824 B1
> 3 MOD13Q1_249 0.1824 C1
> 4 MOD13Q1_249 0.1774 A2
>
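A minimal version of that suggestion, using the first three rows of the posted data frame:

```r
DF <- data.frame(V1 = c("MOD13Q1_249", "MOD13Q1_249", "MOD13Q1_249"),
                 V5 = c(0.1723, 0.1824, 0.1824),
                 V6 = c("A1", "B1", "C1"))
# One row per V1 value, one column per V6 value, cells filled with V5
xtabs(V5 ~ V1 + V6, data = DF)
```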
Is there any easy way to pull out the row indexes for a logical
matching statement?
#example code#
foo <- data.frame(name=c(rep("A", 25), rep("B", 25), rep("C", 25),
rep("A", 25)), stuff=rnorm(100), and=rnorm(100), things=rnorm(100))
#this is
Dear all,
does anyone know how I can extract specific p-values for covariates
from an aftreg object? After fitting a model with aftreg I can find
all different variables by using str(), but there's no place where
p-values are kept. The odd thing is that print() displays them
correctly.
EXAM
Hi Stephen,
See below.
On Thursday 18 February 2010 11:01:25 am stephen sefick wrote:
> Is there any easy way to pull out the row indexes for a logical
> matching statment?
>
> #example
code#
> foo <- data.frame(name=c(rep("A", 25), rep("B"
If I understand your question, you can try this:
which(foo$name %in% c("A", "B"))
On Thu, Feb 18, 2010 at 2:01 PM, stephen sefick wrote:
> Is there any easy way to pull out the row indexes for a logical
> matching statment?
>
> #example code###
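A sketch of the which()/%in% idiom, using the foo data frame from the question:

```r
set.seed(1)  # only so the random columns are reproducible
foo <- data.frame(name  = c(rep("A", 25), rep("B", 25), rep("C", 25), rep("A", 25)),
                  stuff = rnorm(100), and = rnorm(100), things = rnorm(100))
idx <- which(foo$name %in% c("A", "B"))  # row indexes where name is A or B
length(idx)  # 75: rows 1-50 plus rows 76-100
```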
Hi:
You might also want to consider the use of subset, as in
subset(foo, name == "A") or
subset(foo, name %in% c("A", "B"))
HTH,
Dennis
On Thu, Feb 18, 2010 at 8:01 AM, stephen sefick wrote:
> Is there any easy way to pull out the row indexes for a logical
> matching statment?
>
>
Hi,
I am trying to use svm for regression data.
this is how my data looks like:
>dataTrain
x y z
1 4 6
2 5 4
3 7 5
>classTrain
a
2
3
4
>dataTest
x y z
1 7 2
2 8 3
>classTest
a
3
4
5
building the model
model<-svm(dataTrain,classTrain,type="nu-regression")
pred <- predict(mo
Göran, no worries - your help & advice is already invaluable!
Göran Broström wrote:
2010/2/18 Philipp Rappold :
Göran, David,
in order to adapt aftreg to my needs I wrote a little function that I would
like to share with you and the community.
I once promised to fix this 'asap'. Now I promis
At one time the "answer" would have been to buy a copy of Venables and
Ripley's "Modern Applied Statistics with S" (and R), and that would
still be a sensible strategy. There are now quite a few other R-
centric texts that have been published in the last few years. Search
Amazon if needed. Y
DISCLAIMER: This represents my personal view and in no way reflects that of
my company.
Warning: This is a long harangue that contains no useful information on R.
May be wise to delete without reading.
--
Sorry folks, I still don't understand your comments. As Cody's original post
pointe
Bert,
I have to disagree with just part of what you said. The ultimate
savings by using R is astronomical. Up front it would definitely cost
more, as you so eloquently stated. So it boils down to short-term vs.
long-term thinking.
More importantly, the statistical/graphical reports create
Dear R-users,
I often stack plots that have the same x-axis. To save space and have
the plots themselves as large as possible I like to minimize the margins
between the plots to zero. I use the "mfrow" and "mar" parameters to
achieve this.
However, the different margin settings for the individua
Dear all,
When I try to return some vectors from some functions within a function, it
indicates an error, "Error in rbind(ck1, ck2, ck3) : object 'ck1' not found",
in one of the iterations and stops. Since I am not experienced in programming,
can anyone give me a suggestion to inspect this
I don't seem to have the unnamed package loaded that has aftreg(), but
in general you ought to be able to get what you want by looking not
just at the aftreg-object but also at the print(aftreg)-object using
str().
On Feb 18, 2010, at 11:07 AM, Philipp Rappold wrote:
Dear all,
does anyo
2010/2/18 Philipp Rappold :
> Göran, David,
>
> in order to adapt aftreg to my needs I wrote a little function that I would
> like to share with you and the community.
I once promised to fix this 'asap'. Now I promise to do it tonight. OK?
Göran
>
>
> WHAT DOES IT FIX?
>
> (1) Using the id-argum
2010/2/18 Philipp Rappold :
> Dear all,
>
> does anyone know how I can extract specific p-values for covariates from an
> aftreg object? After fitting a model with aftreg I can find all different
> variables by using str(), but there's no place where p-values are kept. The
> odd thing is that print
This code works:
subset(NativeDominant.df,!ID=="37-R17")
This code does not:
Tree.df<-subset(NativeDominant.df,!ID==c("37-R17","37-R18","10-R1","37-R21","37-R24","R7A-R1","3-R1","37-R16"))
How do I get subset() to work on a range of values?
Use ' %in%':
Tree.df<-subset(NativeDominant.df,!ID %in%
c("37-R17","37-R18","10-R1","37-R21","37-R24","R7A-R1","3-R1","37-R16"))
On Thu, Feb 18, 2010 at 3:59 PM, chipmaney wrote:
>
> This code works:
>
> subset(NativeDominant.df,!ID=="37-R17")
>
>
> This code does not:
>
> Tree.df<-subset(Native
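A small sketch of why '==' fails here while '%in%' works (the IDs below are a made-up subset):

```r
df <- data.frame(ID = c("37-R17", "37-R18", "10-R1", "5-R2"), val = 1:4)
# '==' recycles its right-hand side element-wise, which is not set membership;
# '%in%' tests each ID against the whole set of values
subset(df, !ID %in% c("37-R17", "37-R18"))  # keeps the "10-R1" and "5-R2" rows
```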
subset(df, x %in% c(...))
chipmaney wrote:
This code works:
subset(NativeDominant.df,!ID=="37-R17")
This code does not:
Tree.df<-subset(NativeDominant.df,!ID==c("37-R17","37-R18","10-R1","37-R21","37-R24","R7A-R1","3-R1","37-R16"))
how do i get subset() to work on a range of values?
I'm observing odd behavior of the rep() function when using variables as
parameters in a loop. Here's a simple loop over a vector 'branches' that is
c(5,6,5,5,5). The statement in question is
print(c(ni,rep(i,times=ni)))
which works properly the first time through the loop, but the second
Hello,
Can summary.formula.reverse be customized to allow other summary
statistics to be reported rather than the quartiles and mean +/- sd? The
"fun" option apparently doesn't apply when method='reverse'
Thanks
--
Abhijit Dasgupta, PhD
Statistician | Clinical Sciences Section | NIAMS/NIH
Pure Food and Drug Act: 1906
FDA: 1930s
founding of SAS: early 1970s
(from the history websites of SAS and FDA)
What did pharmaceutical companies use for data analysis before there was
SAS? And was there much angst over the change to SAS from whatever was
in use before?
Or was there not such
Might be the case that the 'while' loop was not executed and therefore
'ck1' was not defined. You might put a check to see if that was
happening. Also you are incrementing 'count1' in the functions and it
is not being passed in as a parameter. What are you expecting it to
do? Is it defined in t
Cannot reproduce, what is branches? If you can narrow it down to a
"commented, minimal, self-contained, reproducible" example, you're far
more likely to get help from the list.
dkStevens wrote:
I'm observing odd behavior of the rep(...) procedure when using variables as
parameters in a loop.
Erik Iverson wrote:
Cannot reproduce, what is branches? If you can narrow it down to a
"commented, minimal, self-contained, reproducible" example, you're far
more likely to get help from the list.
My blinded guess though, is something to do with FAQ 7.31.
On Feb 18, 2010, at 12:33 PM, David Winsemius wrote:
I don't seem to have the unnamed package loaded that has aftreg(),
but in general you ought to be able to get what you want by looking
not just at the aftreg-object but also at the print(aftreg)-object
using str().
Trying that approach
Even though it may work for a small subset, it can still break on
larger sets. Your code was doing a number of 'unlist' and tearing
apart the data and it is possible that some of the transformations
were not aligned with the data in the way you thought them to be.
What you need to do in that case
Sorry, not reproducible. This works for me (as expected):
branches<-c(5,6)
iInd = 1
for(i in 1:length(branches)) {
print((1:branches[i])+iInd-1) # iInd is a position shift of the index
ni = branches[i]
print(i)
print(ni)
print(c(ni,rep(i,times=ni)))
# ... some interesting other
Should the svyby function be able to work with svyquantile? I get the
error below ...
data(api)
dclus1<-svydesign(id=~dnum, weights=~pw, data=apiclus1, fpc=~fpc)
svyby(~api00,
design=dclus1,
by = ~stype,
quantiles=c(.25,.5,.75),
FUN=svyquantile,
na.
Dear list,
can anyone point me to a function or library that can create a graph similar to
the one in the following powerpoint presentation?
http://bmi.osu.edu/~khuang/IBGP705/BMI705-Lecture7.ppt
(pages 36-37)
In order to try to explain the graph, the way I see it in R terms is something
li
I'm uncertain how helpful it will be to give example code, but last week,
this model gave an error message to the tune of "failed to converge" after
about 5 minutes of run-time :
library(nlme)
model.A<- lme (fixed = avbranch~ wk*trt*pop
, random = ~wk|ID/fam/pop, data=branch)
It seemed
Thank you,
but:
- How do I create a condition object watching for memory overflow? Or,
alternatively, excess time?
- How do I tell R to end the current lapply item, clean up memory and proceed
to the next item? (I know I can write a wrapper function including gc() and
direct lapply to that wra
Well, yes and no. Obviously I was not asking for a complete recap of
all the theory on the subject. My main concern is finding readily
available CRAN functions and packages that would help me in the
process. I've found the UCLA site to be very informative and spent a
lot of time there the last c
Hey Jim,
That appears to work properly with my larger data set. That's really strange
to me, though; why would my procedure not work even though the test works
correctly? I have always coded under the assumption that the code doesn't do
anything the user doesn't tell it to, but I can't see a p
Hey, I have data like this:
tree azimuth distance
1 312 200
2 322 201
3 304 173
4 294 154
5 313 95
The "azimuth" stands for the azimuth of tree from plot center, and the
"distance" is the distance of tree from plot center.
I wan
Hi,
I am using the command
>sample(c(0,1,2),1,prob=c(0.2,0.3,0.5))
and I have this error notification
"Error in sample(c(0,1,2),1,prob=c(0.2,0.3,0.5)):
unused argument(s)(1,prob=c(0.2,0.3,0.5))
I don't know what is going wrong. Please give me some suggestions.
Thank you
Best,
Jing
The key dates are 1938 and 1962. The FDC act of 1938 essentially mandated
(demonstration of) safety. The tox testing infrastructure grew from that. At
that time, there were no computers, little data, and little statistics
methodology. Statistics played little role -- as is still mainly the case
today fo
On Thu, Feb 18, 2010 at 7:15 PM, Erik Iverson wrote:
>
>
> Erik Iverson wrote:
>>
>> Cannot reproduce, what is branches? If you can narrow it down to a
>> "commented, minimal, self-contained, reproducible" example, you're far more
>> likely to get help from the list.
>>
>
> My blinded guess thoug
?traceback may be useful.
Bert Gunter
Genentech Nonclinical Biostatistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of jim holtman
Sent: Thursday, February 18, 2010 10:17 AM
To: ROLL Josh F
Cc: r-help@r-project.org
Subject: R
Abhijit Dasgupta wrote:
Hello,
Can summary.formula.reverse be customized to allow other summary
statistics to be reported rather than the quartiles and mean +/- sd? The
Not easily. I'm not sure which other statistics would be descriptive
however; certainly not the min and max or standard e
Works for me:
> sample(c(0,1,2),1,prob=c(0.2,0.3,0.5))
[1] 2
On Thu, Feb 18, 2010 at 7:48 AM, wrote:
> Hi,
>
> I am using the command
>
> >sample(c(0,1,2),1,prob=c(0.2,0.3,0.5))
>
> and I have this error notification
>
> "Error in sample(c(0,1,2),1,prob=c(0.2,0.3,0.5)):
> unused argument(s)(1
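One common cause of that error is an object in the workspace masking base::sample; this is only a guess about the poster's session, sketched below:

```r
sample <- function(x) x   # a masking definition like this one
# sample(c(0, 1, 2), 1, prob = c(0.2, 0.3, 0.5))  # would now fail:
#   unused argument(s) (1, prob = c(0.2, 0.3, 0.5))
rm(sample)                # remove the mask and base::sample is found again
sample(c(0, 1, 2), 1, prob = c(0.2, 0.3, 0.5))  # draws one of 0, 1, 2
```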
Hi,
I am having trouble with svm regression; it is not giving the right results.
example
> model <- svm(dataTrain,classTrain,type="eps-regression")
> predict(model, dataTest)
36 37 38 39 40 41
42
-13.838257 -1.475401 10.502739 -3.047656 -8.71369
That's what I did. 'branches' was shown at the top.
branches = c(5,6,5,5,5)
I tested this. When I copy and paste into R 10.0 I get the result in the
post. Perhaps I should reinstall R. I guess I don't see how much more
narrow I can get than this.
iInd = 1
for(i in 1:length(branches)
I could send the entire bit of code, but I was hoping that someone would
recognize the issue from past experience. It may be an artifact of other
parts of the code. I observed the problem in a larger context and cut
out all that you see below. The comments were next to the results to
clarify. Ap
On 17.02.2010 15:38, rkevinbur...@charter.net wrote:
Thank you for the tip. I was used to inserting write statements and was
surprised when it didn't work; reading this section, I see that I shouldn't
have been doing this anyway.
One more question: is there another call that I can use to prin
What does
print(as.integer(branches))
give? That is what rep() uses.
/H
On Thu, Feb 18, 2010 at 7:49 PM, dkStevens wrote:
>
> I could send the entire bit of code but I was hoping that someone would
> recognize the issue from past experience. I may be an artifact of other
> parts of the code.
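The FAQ 7.31 guess and the as.integer() hint above can be illustrated like this (a hypothetical way 'branches' could have picked up floating-point error):

```r
branches <- c(5, 6, 5, 5, 5) - 1e-15  # tiny error invisible at default precision
print(branches)                        # [1] 5 6 5 5 5
print(as.integer(branches))            # [1] 4 5 4 4 4 -- what rep() actually uses
length(rep(1, times = branches[1]))    # 4, not the expected 5
```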
Dear gurus,
I've analyzed a (fake) data set ("data") using logistic regression (glm):
logreg1 <- glm(z ~ x1 + x2 + y, data=data, family=binomial("logit"),
na.action=na.pass)
Then, I created a data frame with 2 fixed levels (0 and 1) for each predictor:
attach(data)
x1<-c(0,1)
x2<-c(0,1)
y<-c(0,1
Hi useRs,
This is not so much a help request as it is a request for feedback about the
possibilities of using Natural Language Processing (NLP) techniques on the
r-help archives for a more 'effective' retrieval of answers.
A few points that may capture what I'm trying to get at:
1) R has an emergi
On 18.02.2010 17:54, madhu sankar wrote:
Hi,
I am trying to use svm for regression data.
this is how my data looks like:
dataTrain
x y z
1 4 6
2 5 4
3 7 5
classTrain
a
2
3
4
dataTest
x y z
1 7 2
2 8 3
classTest
a
3
4
5
building the model
model<-svm(dataTrain,cl
On 18.02.2010 19:43, madhu sankar wrote:
Hi,
I am having trouble with svm regression.it is not giving the right results.
example
model<- svm(dataTrain,classTrain,type="eps-regression")
predict(model, dataTest)
36 37 38 39 40 41
42
-13.838257
On 19/02/2010, at 1:12 AM, John Sorkin wrote:
> It is easy to devolve into visceral response mode, lose objectivity and slip
> into intolerance. R, S, S-Plus, SAS, PASW (nee SPSS), STATA, are all tools.
> Each has strengths and weaknesses. No one is inherently better, or worse than
> the other
On Thu, Feb 18, 2010 at 12:36 PM, Bert Gunter wrote:
> The key dates are 1938 and 1962. The FDC act of 1938 essentially mandated
> (demonstration of) safety. The tox testing infrastructure grew from that.At
> that time, there were no computers, little data, little statistics
> methodology. Statist
Christopher W. Ryan wrote:
Pure Food and Drug Act: 1906
FDA: 1930s
founding of SAS: early 1970s
(from the history websites of SAS and FDA)
What did pharmaceutical companies use for data analysis before there was
SAS? And was there much angst over the change to SAS from whatever was
in use bef
Points regarding the advantages of LaTeX are very well taken. If I were
fortunate enough to have complete ownership of the document (as might be the
case with a DSMB report produced by the Biostat group), then LaTeX would be a
wonderful choice. Though I am not a LaTeX user, I can easily imagine
I have run a kruskal.test() using the by() function, which returns a list of
results like the following (subset of results):
Herb.df$ID: 4-2
Kruskal-Wallis chi-squared = 18.93, df = 7, p-value = 0.00841
Herb.df$ID: 44-1
I am old enough. Memory isn't always reliable, but Doug Bates' recounting
is what I remember, and a quick search has BMDP developed in 1961 and SAS
in 1966. To my surprise, the search produced a site that offered BMDP
for sale.
On 2/18/2010 11:15 AM, Peter Dalgaard wrote:
Christopher W. Ryan w
On 18-Feb-10 18:58:57, Dimitri Liakhovitski wrote:
> Dear gurus,
> I've analyzed a (fake) data set ("data") using logistic regression
> (glm):
>
> logreg1 <- glm(z ~ x1 + x2 + y, data=data, family=binomial("logit"),
> na.action=na.pass)
>
> Then, I created a data frame with 2 fixed levels (0 and