Hi,
If I have two vectors x and y and I do lm(y~x), and I now want to define
variables that are the standard errors of the slope and intercept, how do I
do that?
Thanks,
Alex
stderr_int <- summary(lm(y ~ x))$coefficients[1,2]
stderr_slope <- summary(lm(y ~ x))$coefficients[2,2]
Jeff.
On Oct 3, 2007, at 3:01 AM, Alexander Moreno wrote:
> Hi,
>
> If I have two vectors x and y and I do lm(y~x) and now I want to
> define
> variables that are the standard errors of the
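A small follow-up to Jeff's answer: fitting the model once and indexing the
coefficient matrix by column name avoids fitting it twice and does not depend
on the column order; a minimal sketch (x and y as in the original question):
fit <- lm(y ~ x)
se  <- coef(summary(fit))[, "Std. Error"]   # named vector of standard errors
stderr_int   <- se[1]                       # intercept
stderr_slope <- se[2]                       # slope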
On Wednesday 03 October 2007 08:23:15 Chung-hong Chan wrote:
CC >
CC > Suppose I have a long list of age, gender and bmi from a data.frame
CC > called msltdata.
CC >
CC > > age <- msltdata$age
CC > > gender <- msltdata$data
CC > > bmi <-msltdata$bmi
CC > > age
CC > [1] 5 10 14
CC > > gender
CC > [
Yes, I tried. Wrong calculation.
On 10/3/07, Stefan Grosse <[EMAIL PROTECTED]> wrote:
> On Wednesday 03 October 2007 08:23:15 Chung-hong Chan wrote:
> CC >
> CC > Suppose I have a long list of age, gender and bmi from a data.frame
> CC > called msltdata.
> CC >
> CC > > age <- msltdata$age
> CC >
> "BL" == Birgit Lemcke <[EMAIL PROTECTED]>
> on Mon, 1 Oct 2007 17:30:16 +0200 writes:
BL> When I try to load the DCluster package I get the
BL> following error message (R 2.5.1; PowerBook G4; Mac OS X
BL> 10.4.10)
BL> (Loading required package) Lade nötiges Paket: spdep
On Tuesday 02 October 2007 22:54:48 Matthew Dubins wrote:
MD > in what respects do R routines work faster/more efficiently/more
MD > accurately than those of MATLAB/SPSS.
There has been a benchmark:
http://www.sciviews.org/benchmark/index.html
but that's quite old; it would be interesting to s
I found you online...
Can you help with empirical probability?
Hi Partha. I really liked the email that you sent me; it really inspired me. I
have been breezing through the chapters and doing quite well. You should be a
teacher. After all the time my college instructor spent with the cla
Dear listers,
I'm using gam (from mgcv) for semi-parametric regression on small datasets (10 to
200 observations), and I am facing a problem of overfitting.
The help suggests avoiding overfitting by inflating the
effective degrees of freedom in the GCV evaluation with an increased "gamma
What sort of model structure are you using? In particular, what is the response
distribution? For Poisson and binomial responses, overfitting can be a sign of
overdispersion, and quasipoisson or quasibinomial may be better. Also I would
not expect to get useful smoothing parameter estimates from 10 data
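A minimal sketch of the two remedies discussed in this thread, assuming (for the
first line only) a count response; y, x1 and x2 are the variables from the thread:
library(mgcv)
# if the response is a count, quasipoisson absorbs overdispersion that can
# masquerade as overfitting
fit1 <- gam(y ~ x1 + s(x2), family = quasipoisson)
# gamma > 1 makes each effective degree of freedom cost more in the GCV score
fit2 <- gam(y ~ x1 + s(x2), gamma = 1.4)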
I don't think it is so much that the R routines
work faster/more efficiently/more accurately
but that the user works faster/more efficiently/
more accurately.
Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and "A Guide for the Unwilling S User")
On 03/10/2007 2:23 AM, Chung-hong Chan wrote:
> Thanks for your answer. But I have a strange question that I don't
> know how to explain and I really don't know how to spot the
> problematic part.
>
> Suppose I have a long list of age, gender and bmi from a data.frame
> called msltdata.
>
>> age
Dear All,
Some simple interpretation assistance.
I am producing regression trees based on a set of 10 continuous predictors.
I have cross-validated to fit the model to minimum deviance prior to the summary
output.
I have not found a source that clearly states the meaning of the residual
mean devianc
zhijie zhang wrote:
> Dear friends,
>The following is an example to explain my question. I want to get a
> legend which will show the z-values corresponding to the different colors in the
> image() function.
>
> x<-sort(runif(10)) #x-coordinates
> y<-sort(runif(10)) #y-coordinates
> z<-matrix(runif(100)
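One option for the legend (not necessarily the solution adopted in the thread)
is image.plot() from the fields package, which draws the image together with a
legend strip mapping colours to z values; a minimal sketch:
library(fields)                  # provides image.plot()
x <- sort(runif(10))             # x-coordinates
y <- sort(runif(10))             # y-coordinates
z <- matrix(runif(100), 10, 10)  # z-values
image.plot(x, y, z)              # image plus a colour legend for z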
Yeah, it works fine now.
I think I need to modify my function to check for factor.
Regards,
C
On 10/3/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 03/10/2007 2:23 AM, Chung-hong Chan wrote:
> > Thanks for your answer. But I have a strange question that I don't
> > know how to explain and I
On Wed, 3 Oct 2007, Patrick Burns wrote:
> I don't think it is so much that the R routines
> work faster/more efficiently/more accurately
> but that the user works faster/more efficiently/
> more accurately.
And in particular a user can do many informative/insightful/penetrating
statistical/grap
Hi R-list members,
Could somebody explain to me the meaning of the '.' in the formula
SumTL~. below? I could not find it in the help pages. I'm guessing it is
substituted by v1+v2+v3+.. for all independent variables vi.
Furthermore, I would like to add interaction effects to the model;
is this a
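For reference, "." in a model formula does expand to all columns of the data
argument other than the response, and interaction terms can be added with the
usual formula operators; a minimal sketch with a hypothetical data frame d:
fit_main <- lm(SumTL ~ .,   data = d)   # main effects of every other column
fit_int  <- lm(SumTL ~ .^2, data = d)   # main effects plus all two-way interactions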
How can I print only the p-value of the Kolmogorov-Smirnov test?
> ks.test(VeriSeti1, VeriSeti2)
Two-sample Kolmogorov-Smirnov test
data: VeriSeti1 and VeriSeti2
D = 0.5, p-value = 0.4413
alternative hypothesis: two-sided
This expression gives me the whole test results; I need only t
I appreciate your quick reply.
I am using a model of the following structure:
fit <- gam(y ~ x1 + s(x2))
where y, x1, and x2 are quantitative variables.
So the response distribution is assumed to be gaussian(default).
Now I understand that the data size was too small.
Thank you.
Best Wishes,
A
I've rolled up R-2.6.0.tar.gz a short while ago. This is a development
release which contains a number of new features. In particular, the handling
of data with a large number of identical strings should be more
memory-efficient.
Also, a number of mostly minor bugs have been fixed. See the full
I want to use the plot(model) function to generate Tukey-Anscombe and Q-Q plots
of an lm(). I managed to change all the labels except the main one, which apparently
is neither main nor sub. So far I have tried these par settings: cex (changes symbol
size within the plot), cex.main (no effect), cex.sub (no effec
ks.test(x,y)[2]
On 10/3/07, Emre Unal <[EMAIL PROTECTED]> wrote:
> How can i print only the P-Value of the kolmogorov smirnov test?
>
>
> > ks.test(VeriSeti1, VeriSeti2)
>
> Two-sample Kolmogorov-Smirnov test
>
> data: VeriSeti1 and VeriSeti2
> D = 0.5, p-value = 0.4413
> alternative hypo
Thanks for sending the data...
The problem is triggered by only having one observation per district, so that
you have a random effect for each datum. This causes the correlation of the
parameter estimates/predictions for the smooth term to become so high that the
covariance matrix of the smooth coe
On Wednesday 03 October 2007 10:49, Ariyo Kanno wrote:
> I appreciate your quick reply.
> I am using the model of the following structure :
>
> fit <- gam(y~x1+s(x2))
>
> ,where y, x1, and x2 are quantitative variables.
> So the response distribution is assumed to be gaussian(default).
>
> Now I un
Peter Dalgaard wrote:
> I've rolled up R-2.6.0.tar.gz a short while ago. This is a development
> release which contains a number of new features. In particular, the handling
> of data with a large number of identical strings should be more
> memory-efficient.
>
> Also, a number of mostly minor b
On 03/10/2007 5:41 AM, [EMAIL PROTECTED] wrote:
> I want to use the plot(model) function to generate Tukey-anscomb and Q-Q
> plots of a lm(). I manage to change all labels but the main one which
> apparently is neither main or sub. So far I have tried as par setting: cex
> (changes symbol size w
On Wed, 3 Oct 2007, [EMAIL PROTECTED] wrote:
I want to use the plot(model) function to generate Tukey-anscomb and Q-Q
plots of a lm(). I manage to change all labels but the main one which
apparently is neither main or sub. So far I have tried as par setting:
cex (changes symbol size within the
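For what it's worth, the headings drawn by plot.lm() are controlled by its own
caption and sub.caption arguments rather than by par(); a minimal sketch,
assuming a fitted object fit (names and titles are illustrative):
fit <- lm(y ~ x)
plot(fit, which = 1:2,
     caption = c("Tukey-Anscombe plot", "Normal Q-Q plot"),  # per-panel headings
     sub.caption = "")  # suppress (or replace) the deparsed call used as overall title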
Hi,
why don't you try
ks.test(VeriSeti1, VeriSeti2)$p.value
All the best
Jenny
>How can i print only the P-Value of the kolmogorov smirnov test?
>
>
>> ks.test(VeriSeti1, VeriSeti2)
>
>Two-sample Kolmogorov-Smirnov test
>
>data: VeriSeti1 and VeriSeti2
>D = 0.5, p-value = 0.4413
Try
str(ks.test(VeriSeti1, VeriSeti2)) # see ?str
then
ks.test(VeriSeti1, VeriSeti2)$p.value
Kind regards
Frede Aakmann Tøgersen
> -Original message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf of Emre Unal
> Sent: 3 October 2007 11:32
> To: r-h
Dear All,
With the following code:
pdf(file="figure.pdf",family="URWPalladio")
plot(0,0,type="n")
text(0,0,expression(integral(f(x)*dx, a, b)))
dev.off()
the integral symbol gets horrible. With other fonts, the same does not
occur. Is there some way of using Palatino-like fonts with a nice
integ
Hi
Paul Smith wrote:
> Dear All,
>
> With the following code:
>
> pdf(file="figure.pdf",family="URWPalladio")
> plot(0,0,type="n")
> text(0,0,expression(integral(f(x)*dx, a, b)))
> dev.off()
>
> the integral symbol gets horrible. With other fonts, the same does not
> occur. Is there some way o
On 10/3/07, Paul Murrell <[EMAIL PROTECTED]> wrote:
> > With the following code:
> >
> > pdf(file="figure.pdf",family="URWPalladio")
> > plot(0,0,type="n")
> > text(0,0,expression(integral(f(x)*dx, a, b)))
> > dev.off()
> >
> > the integral symbol gets horrible. With other fonts, the same does not
I want to get a sample of some arbitrary size from a population having only two
values, 0 and 1, with replacement, but with different probabilities of selection.
For example, 0 will be selected with probability 0.4 and 1 with 0.6. I could use
the sample function, i.e. sample(c(0,1), 30, T), to get this, h
On 10/3/07, stat stat <[EMAIL PROTECTED]> wrote:
> I want to get a sample of some arbitrary size from a population having only
> two values 0 and 1 with replacement, but with different probability for
> selection. For example 0 will be selected with probability 0.4 and 1 with
> 0.6. I could use
Hi Folks,
The question I'm asking, regarding the use of function
definitions in the context described below, is whether
there are subtle traps or obscure limitations I should
watch out for. It is probably a rather naive question...
Quite often, one has occasion to execute interactively
a lot of R
Hi,
Can't you do something like:
x <- c(1,1,1,1,1,1,0,0,0,0)
sample(x, 30, T)
Best Wishes,
Jenny
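sample() also has a prob argument, which gives the requested 0.4/0.6 weighting
directly, without constructing a weighted population vector; a minimal sketch:
sample(c(0, 1), 30, replace = TRUE, prob = c(0.4, 0.6))
# or, since only 0/1 values are needed:
rbinom(30, size = 1, prob = 0.6)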
Hello,
I have a script in the R language that sorts using the order() method of
R. What I need is to reimplement this method in some other language
(PHP, Perl, Python, maybe Java).
First I tried to reimplement it in PHP; here are some numbers that I need to
sort:
1,2,3,4,5,6,7,8,9,10,
You're sorting characters in the PHP script instead of numbers!
sort(as.character(o)) in R will yield the same results as your PHP script
does.
Thierry
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Res
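A small illustration of Thierry's point, showing how lexicographic (character)
ordering differs from numeric ordering for the same values:
o <- c(1, 2, 10, 9)
order(o)                 # numeric order:       1 2 4 3
order(as.character(o))   # lexicographic order: 1 3 2 4  ("1" < "10" < "2" < "9")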
I am using R.2.4.1 on Windows XP 5.1 (SP 2).
I have the following line in my R code.
Analysis=anova(lm(PM ~ x))
It works the first 60 or so times it is called but
then I get the following error message.
Error in lm.fit(x, y, offset = offset, singular.ok =
singular.ok, ...) : 0 (non-NA) cases
I
Hi,
I was trying to run a function from the RDCOMClient package. To make
it short: I tried to run the example in ?.COM, namely
e <- COMCreate("Excel.Application")
books <- e[["Workbooks"]]
As soon as e[["Workbooks"]] is called, the RGUI crashes. Debugging the
function .COM reveals that some
I have generally found that both RDCOMClient and rcom do work, although
around the time of a new R version one might be ready for it before the other,
so try both.
On 10/3/07, Marcin Kopaczynski <[EMAIL PROTECTED]> wrote:
> hi,
>
> i was trying to perform a function from the RDCOMClient package. to m
Hello, Thierry
Thank you for the fast reply.
Actually, my PHP script is not correct; the correct logic is the R script. By
writing this piece of code in PHP I tried to implement the same logic that the R
script does. But this PHP program's algorithm is probably not correct, so the
results are incorrect as well.
I am tryi
Sorry, let me fix 1 sentence.
"Here I try to mean by "overfitting" that GCV was significantly SMALLER
than the mean square error of prediction of the validation data, which
was randomly selected and not used for regression."
> Thank you for valuable advices.
> I'm sorry Dr. N. Wood that by mistak
>
> "Here I try to mean by "overfitting" that GCV was significantly SMALLER
> than the mean square error of prediction of the validation data, which
> was randomly selected and not used for regression."
--- so you could try increasing gamma until this is no longer the case.
--
> Simon Wood, Ma
1. you could place the commands in a file and source the file each
time you want to run it or it might be good enough to place it on the
clipboard and then just do source("clipboard")
2. Thomas Lumley's defmacro in R News 1/3 could be used
Neither of these two require that you do anything specia
Thank you for the valuable advice.
I'm sorry, Dr. N. Wood, that by mistake I sent this reply first to
your personal e-mail address.
I will use the "min.sp" argument when the data size is very small. I'd
like to know whether there are any criteria for selecting "min.sp".
I compared gamma=1.0 and 1.4, and I
Hello Randy
I get emails like this quite a lot.
The most likely problem is in the installation of
the gsl library. To verify that it is in fact installed,
try to compile and run the little Bessel function
example given in gsl-ref, section 2.1.
Get this working first. If it works, this means
t
Thank you for your advice.
I will try further increased "gamma" values, and all-out cross-validations.
2007/10/3, Frank E Harrell Jr <[EMAIL PROTECTED]>:
> Ariyo Kanno wrote:
> > Sorry, let me fix 1 sentence.
> >
> > "Here I try to mean by "overfitting" that GCV was significantly SMALLER
> > than t
Ariyo Kanno wrote:
> Sorry, let me fix 1 sentence.
>
> "Here I try to mean by "overfitting" that GCV was significantly SMALLER
> than the mean square error of prediction of the validation data, which
> was randomly selected and not used for regression."
>
>> Thank you for valuable advices.
If yo
Newbie here (to R) and running Linux...
> install.packages("gsl","~/R")
...
trying URL 'http://cran.wustl.edu/src/contrib/gsl_1.8-4.tar.gz'
Content type 'application/x-tar' length 57051 bytes
opened URL
==
downloaded 55Kb
* Installing *source* pack
Dae-Jin,
Thanks for your (offline) persistence: you are right, the problem is with
`gamm'. There was a dimension dropping error in `formXtViX' (called by
`gamm') for group sizes of 1 (as occurs when you have a random effect per
observation). This will be fixed in mgcv_1.3-28.
best,
Simon
On T
Thanks for the suggestions, Gabor!
On 03-Oct-07 12:52:51, Gabor Grothendieck wrote:
> 1. you could place the commands in a file and source the file each
> time you want to run it or it might be good enough to place it on the
> clipboard and then just do source("clipboard")
Using the file solutio
Vladimir Eremeev wrote:
>
>
> XpeH wrote:
>>
>> I am trying to understand how the order method in R language works, and
>> then I'd like to do the same in some other language.
>>
>
> 1. Type order at the R command prompt (without "(" and ")") and press Enter.
> This will print the body of
XpeH wrote:
>
> I am trying to understand how the order method in R language works, and
> then I'd like to do the same in some other language.
>
1. Type order at the R command prompt (without "(" and ")") and press Enter.
This will print the body of this function, so you can inspect it. You
c
Hi
[EMAIL PROTECTED] napsal dne 03.10.2007 14:56:19:
> I am using R.2.4.1 on Windows XP 5.1 (SP 2).
Upgrade R. Version 2.6.0 is imminent.
>
> I have the following line in my R code.
>
> Analysis=anova(lm(PM ~ x))
>
> It works the first 60 or so times it is called but
> then I get the followi
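The error means that, by that point in the loop, the particular PM/x pair
passed to lm() has no non-NA cases left. A minimal defensive sketch (variable
names as in the original post) that reports such iterations instead of stopping:
ok <- complete.cases(PM, x)
if (sum(ok) < 2) {
  warning("fewer than 2 non-NA cases; skipping this fit")
} else {
  Analysis <- anova(lm(PM ~ x, subset = ok))
}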
Dear all
Sorry to bother you.
I have a GLS model:
modela<-gls(pop2~pop1-1 + ccs + pop1:ccs,data=data2,corr = corSpher(c(5, 0.5
),form = ~ latitude + longitude, nugget = TRUE), method = "ML")
I was wondering how I can work out how much variance is explained by the model
AND by each term.
Any h
Hello,
I have a question regarding the use of an offset term with survreg(),
in the Survival library. In particular, I am trying to figure out on
what scale the offset term should be.
Here's a simple example with no censoring and no coefficients:
-
y = rlnorm(1000, meanlog = 10, sdlog =
On 10/3/07, Ted Harding <[EMAIL PROTECTED]> wrote:
> Thanks for the suggestions, Gabor!
>
> On 03-Oct-07 12:52:51, Gabor Grothendieck wrote:
> > 1. you could place the commands in a file and source the file each
> > time you want to run it or it might be good enough to place it on the
> > clipboar
--- Farrel Buchinsky <[EMAIL PROTECTED]> wrote:
> How do you create a table from a data frame? I tried
> as.table(
> name.of.data.frame) but it bombed out.
> I will include the exact error message in my next
> posting. If I recall
> correctly, it said that the data.frame could not be
> coerced to
Did you check whether 'junk4.RData' was created and what its length was
- maybe an empty file is being created. Is there some sort of quota or
permissions problem? My suggestion would be to look at the size and
permissions on the directory and the file. If you need more help, I
would suggest
Hi list,
I'm currently processing textual data and I would really appreciate some
help with one of my problems.
I have a set of strings and I want to count how often each of these
strings appears in the set.
This is not very difficult and can be done as:
TB<-table(my_set)
plot(TB)
However, I
How do you determine if one string is a subset of another? Does it
only match at the beginning, or anywhere? How large is your set of
strings? Can you use table as you describe and then determine what
the groupings of subsets are and then just add the numbers together?
You can use grep/regexpr t
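If the goal is to count, for each distinct string, how many elements of the set
contain it as a fixed (non-regex) substring, a minimal sketch along the lines
Jim suggests (the data are hypothetical):
strs   <- c("abc", "ab", "b", "abcd", "ab")
u      <- unique(strs)
counts <- sapply(u, function(s) length(grep(s, strs, fixed = TRUE)))
counts   # named by string: how many set elements contain it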
Hi all,
I am using R, trying to get the inverse matrix of (X^T)X, but I keep getting
an error
message like: no b argument and no default value for sprintf(gettext(fmt,
domain = domain), ...).
# my code
To everyone who answered
Many thanks for the explanations. I think I see what
is happening now
You missed one very important line:
> library(Matrix)
This is not R, it is package Matrix, and the error is in that package, as
traceback() shows:
> traceback()
6: sprintf(gettext(fmt, domain = domain), ...)
5: gettextf("not-yet-implemented method for %s(<%s>, <%s>).\n ->> Ask the
package aut
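If the Matrix-package method is not actually needed, the computation is direct
with a plain base-R matrix; a minimal sketch (X is a hypothetical design matrix):
X       <- matrix(rnorm(20), nrow = 5)   # 5 x 4 design matrix
XtX_inv <- solve(crossprod(X))           # crossprod(X) is t(X) %*% X
For solving a linear system, solve(crossprod(X), b) is usually preferable to
forming the explicit inverse.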
On Wed, 3 Oct 2007, Ming-Wen An wrote:
> Hello,
>
> I have a question regarding the use of an offset term with survreg(),
> in the Survival library. In particular, I am trying to figure out on
> what scale the offset term should be.
>
> Here's a simple example with no censoring and no coefficient
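A heavily hedged sketch of one way to check the scale empirically: survreg()
fits a linear predictor for log(T) under the default Weibull and the lognormal
distributions, and an offset() term enters that linear predictor, so under that
reading the offset belongs on the log-time scale. The sdlog and offset values
below are purely illustrative:
library(survival)
y   <- rlnorm(1000, meanlog = 10, sdlog = 1)
off <- rep(2, 1000)                        # a known shift of log(T)
fit <- survreg(Surv(y, rep(1, 1000)) ~ offset(off), dist = "lognormal")
coef(fit)                                  # intercept should be close to 10 - 2 = 8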
Tony,
Thanks for the reply. Actually, the data object 'junk4.RData' was created but
has size 0. It seems no data was saved. But the real data file that I
want to load does have data in it, and I can't load it using my own user account.
But if I use another person's user account on the same system, it can be
> I thought about this some more, and I'm not sure that possibility is
> "to blame." In my time-dependent model, I don't think I'm doing
> anything different than is done for transplant in the Stanford
> Heart Study (the often used example for this kind of time-dependent
> covariate). As in my ca
Robin,
Thanks much for the reply. It turns out, for reasons which are still
unknown to me, that on this machine, doing the following:
% g++ bessel.cpp -o bessel -lgsl -lgslcblas
will tell me it can't find -lgsl, in spite of its path being on my
LD_LIBRARY_PATH. So, being explicit solves
Sounds like you are having permissions problems. And as you're using a
mix of Unix and WinXP, you might be suffering from some strange
permissions settings. WinXP allows a very rich set of permissions which
for many exotic combinations have no corresponding mapping to the much
simpler 9-octet
Hello,
I have been playing around with the statistical distributions in R, and
overall I think the accuracy is very good. However, it seems that for the
Student's t distribution, the CDF loses accuracy when evaluated at values
close to zero. For instance, I did the following in R
On Wed, 3 Oct 2007, Patrick Burns wrote:
> I don't think it is so much that the R routines
> work faster/more efficiently/more accurately
> but that the user works faster/more efficiently/
> more accurately.
Might this be a fortunes candidate? (Perhaps a larger excerpt)
Bert Gunter
Genentech
I've written the function below to simulate the mean 1st through nth
nearest neighbor distances for a random spatial pattern using the
functions nndist() and runifpoint() from spatstat. It works, but runs
relatively slowly - I would appreciate suggestions on how to speed up
this function. Thanks. -
On 3 October 2007 at 12:25, Randy Heiland wrote:
| Thanks much for the reply. It turns out, for reasons which are still
| unknown to me, that on this machine, doing the following:
|
| % g++ bessel.cpp -o bessel -lgsl -lgslcblas
|
| will tell me it can't find -lgsl, in spite of its path bein
On 10/3/2007 7:56 AM, (Ted Harding) wrote:
> Hi Folks,
>
> The question I'm asking, regarding the use of function
> definitions in the context described below, is whether
> there are subtle traps or obscure limitations I should
> watch out for. It is probably a rather naive question...
>
> Quite
Hi all,
I have run into what appears to be a bug in ggplot2; however, I am new
to the ggplot syntax, so I might be missing a key element. The main
issue is that I cannot get geom_abline to plot when colour is used to
identify "group" in the main plot. When I remove colour, geom_abline
works b
Duncan Murdoch wrote:
> On 01/10/2007 11:45 PM, Edna Bell wrote:
>> Hi again.
>>
>> I'm sure that this is really simple.
>>
>> I'm trying to build a package on a Windows Vista machine. I use
>> Rcmd build --binary test
>>
>> but I get the "Please set TMPDIR to a valid temporary directory"
>>
>>
chevolot wrote:
> Dear mailing list:
>
> I am a new user of R and I would like to use the new ARES package. I have
> followed the procedure to import a genepop file and everything seemed to
> work. However, when I ran the aresCalc function, I got an error message and I
> don't know what it means. The e
Metrum Institute and Random Technologies LLC are pleased to announce
the availability of the SASxport package.
The new SASxport package provides the ability to directly read, list,
and write SAS 'xport' files, including proper handling of custom SAS
formats. (This extends existing functional
Both versions work on my PC, but on the PC where I have Bloomberg
installed, the things I described do not work. The rcom package works,
but comCreateObject creates an object which is somehow different from
what COMCreate creates. In particular the slot "ref": row is not present
anymore and so RBl
This is reproducible with R-2.6.0 for me. You might want to report it to
R-Core, the maintainer of that package.
Uwe Ligges
Benjamin Tyner wrote:
> R-helpers,
>
> n <- 100
> arcoefs <- c(0.8)
> macoefs <- c(-0.6)
> p <- length(arcoefs)
> q <- length(macoefs)
> require(nlme)
> tmp <- corARMA(va
If you take a look at what is happening with Rprof, you will see that
most of the time (96.6%) is being taken in the 'nndist' function, so
if you want to improve your algorithm, see whether you can somehow reduce the
number of times you call it, or find a different approach. So it is a
function of the algorith
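For reference, the profiling step referred to above is only a few lines; a
minimal sketch around a hypothetical call f() to the simulation function:
Rprof("nn.prof")         # start profiling to a file
f()                      # run the code being timed
Rprof(NULL)              # stop profiling
summaryRprof("nn.prof")  # time spent per function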
Hi, all,
(version info at end)
I'm running a script which takes input files, does some analysis, and
writes the output to csv files. Last night I ran the script (it took
~6.5 hours) thinking all would go well since it ran on a subset of the
data without issue. However, when I returned this mor
On 10/3/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 10/3/2007 7:56 AM, (Ted Harding) wrote:
> > Hi Folks,
> >
> > The question I'm asking, regarding the use of function
> > definitions in the context described below, is whether
> > there are subtle traps or obscure limitations I should
> > w
It shouldn't be hard, using the definition of GARCH(1,1).
Giovanni
> Date: Tue, 02 Oct 2007 17:14:27 -0400 (EDT)
> From: [EMAIL PROTECTED]
>
> Hey,
>
> Is there any way to simulate a GARCH(1,1
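A minimal sketch of what "using the definition" amounts to: iterate the
variance recursion sigma2[t] = omega + alpha*eps[t-1]^2 + beta*sigma2[t-1] with
Gaussian innovations (the parameter values are illustrative only):
n     <- 1000
omega <- 0.1; alpha <- 0.1; beta <- 0.8   # alpha + beta < 1 for stationarity
eps    <- numeric(n)
sigma2 <- numeric(n)
sigma2[1] <- omega / (1 - alpha - beta)   # start at the unconditional variance
eps[1]    <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- omega + alpha * eps[t - 1]^2 + beta * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1)
}
plot(eps, type = "l")                     # the simulated GARCH(1,1) series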
On Wed, 3 Oct 2007, skylab gupta wrote:
> Hello,
>
> I have been playing around with the statistical distributions in R, and
> overall I think the accuracy is very good. However, it seems that for the
> Student's t distribution, the CDF loses accuracy when evaluated at values
> close to zero. For
It looks like you can pass a vector of neighbour orders (k) to nndist.
nndist(rpp, k=2:10)
Although you get some warnings, the answers appear to be the same.
all.equal(t(sapply(2:10, function(i) nndist(rpp, k=i))), nndist(rpp, k=2:10))
This might be quite a lot faster, depending on how much work is c
I use Windows, R version 2.5.1
When I try to run stepclass (klaR) I get an error message/warning saying:
1: error(s) in modeling/prediction step in: cv.rate(vars = c(model, tryvar),
data = data, grouping = grouping, ...
Actually, I got 16 warnings of this type. Can anyone tell me what this
On 3/10/2007, at 5:48 PM, Peter Dalgaard wrote:
> Rolf Turner wrote:
>> I have factors with levels ``Unit", "Achieved", and "Scholarship";
>> I wish to replace these with
>> "U", "A", and "S".
>>
>> So I do
>>
>> fff <- factor(fff,labels=c("U","A","S"))
>>
>> This works as long as all of
On 3/10/2007, at 5:10 PM, Christos Hatzis wrote:
> Would
>
> levels(fff) <- c("A","S","U")
>
> not work?
Well, not quite. This would scramble the levels. The levels of the
original fff are
c("Unit","Achieved","Scholarship") --- i.e. they are ***not*** in
alphabetical order
Run df from R; here's an example (run on Interix):
$ df /dev/fs/C/WINDOWS
Filesystem         512-blocks     Used  Available Capacity Type Mounted on
//HarddiskVolume2    77706400 34632424   43073976      45% ntfs /dev/fs/C
See man df for details.
> -Original Message-
> From: [EMAIL
At www.jstatsoft.org we now have three special volumes
Volume 22, Ecology and Ecological Modelling in R (Thomas Kneib and
Thomas Petzoldt, eds.)
Volume 20, Psychometrics in R (Jan de Leeuw and Patrick Mair, eds.)
Volume 18, Spectroscopy and Chemometrics in R (Kate Mullen and Ivo
van Stokkum,
On 3/10/2007, at 8:30 PM, Patrick Burns wrote:
> I don't think it is so much that the R routines
> work faster/more efficiently/more accurately
> but that the user works faster/more efficiently/
> more accurately.
Well said, O Wise and Ancient One!!! :-) :-) :-)
Rolf Turner wrote:
>> Does it even work? (What if it is the first or the 2nd level that is
>> absent?
>
> Yes it works. What's the problem?
>
> To beat it to death: if the second level of fff is absent then
> fff will consist entirely of 1's and 3's,
> and so c("U","A","S")[fff] wil
On 4/10/2007, at 7:50 AM, Peter Dalgaard wrote:
> Rolf Turner wrote:
>>> Does it even work? (What if it is the first or the 2nd level that
>>> is absent?
>>
>> Yes it works. What's the problem?
>>
>> To beat it to death: if the second level of fff is absent
>> then fff will consist
My suggestion was based on the standard sort order of a factor.
What I did as an example was to create a factor having your specified
levels:
> x <- factor(sample(c("Unit","Achievement","Scholarship"), 10, TRUE))
> x
[1] Achievement Unit        Achievement Unit        Scholarship Achievement
Unit
Hi, my name is Luis, and I have a problem with a dataset.
Its name is algae and it contains data collected in a lake and the respective
proliferation of algae.
The parameters that it has are: "mxPH", "mnO2", "Cl", "NO3", "NH4", "oPO4",
"PO4", "Chla" and "a1", all numeric.
a1 - algae1
If I try to do S
Rolf Turner wrote:
> P.S. ***Are*** there any risks/dangers in following Christos
> Hatzis' suggestion of simply doing
>
> levels(fff) <- c("U","A","S") ???
Not if the levels are right to begin with.
Problems only arise if fff somehow becomes a two-level factor, e.g. if
yo
On 4/10/2007, at 8:29 AM, Peter Dalgaard wrote:
> Rolf Turner wrote:
>> P.S. ***Are*** there any risks/dangers in following Christos
>> Hatzis' suggestion of simply doing
>>
>> levels(fff) <- c("U","A","S") ???
> Not if the levels are right to begin with.
>
> Problems onl
Peter Dalgaard wrote:
> Rolf Turner wrote:
>
>> P.S. ***Are*** there any risks/dangers in following Christos
>> Hatzis' suggestion of simply doing
>>
>> levels(fff) <- c("U","A","S") ???
>>
> Not if the levels are right to begin with.
>
> Problems only arise if fff s
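For completeness, the levels<- replacement also accepts a named list, which
renames levels by name rather than by position and so sidesteps the ordering
and dropped-level pitfalls discussed above; a minimal sketch:
fff <- factor(c("Unit", "Scholarship", "Achieved", "Unit"))
levels(fff) <- list(U = "Unit", A = "Achieved", S = "Scholarship")
fff   # U S A U, with levels U A S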
Your solution would work if the data frame contained the raw data. In that
case the table function, as you outlined, would produce a table crossing all the
levels of column 1 with all the levels of column 2.
Instead my data frame is the table. It is an aggregate table (I may be using
the wrong buzzwords h
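When the data frame already holds the aggregated counts (one row per cell plus
a count column), xtabs() rebuilds a table object from it; a minimal sketch with
hypothetical column names:
agg <- data.frame(sex   = c("F", "F", "M", "M"),
                  group = c("a", "b", "a", "b"),
                  n     = c(10, 4, 7, 9))
tab <- xtabs(n ~ sex + group, data = agg)   # 2 x 2 contingency table of counts
tab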