On Sun, 2018-01-21 at 09:59 +0100, Luigi Marongiu wrote:
> Dear all,
> I have a string, let's say "testing", and I would like to extract in
> sequence each letter (character) from it. But when I use substr() I only
> properly get the first character, the rest is empty (""). What am I getting
> wrong?
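For illustration, a minimal sketch of two usual ways to do this (not
necessarily the reply that followed in the thread): substr() needs both
a start and a stop position, so you loop over the positions; strsplit()
with an empty split string does it in one call.
s <- "testing"
sapply(seq_len(nchar(s)), function(i) substr(s, i, i))  ## one character per position
# [1] "t" "e" "s" "t" "i" "n" "g"
strsplit(s, "")[[1]]                                    ## same result in one call
# [1] "t" "e" "s" "t" "i" "n" "g"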
A. On Sat, 2018-03-31 at 15:45 +0200, Henri Moolman wrote:
> Could you please provide help with something from R that I find rather
> puzzling? In the small program below x[1]=1, . . . , x[5]=5. R also
> finds that x[1]<=5 is TRUE. Yet when you attempt to execute while, R does
> not seem to
Not necessarily. The R-help archives are publicly accessible,
with a "sort by date" option. So if someone sets up a web-page
monitor which reports back when new messages appear there
(at the bottom end), then their email addresses are readily
copied (subject to " at " --> "@").
Once they have the
Apologies for disturbance! Just checking that I can
get through to r-help.
Ted.
Well pointed out, Jim!
It is unfortunate that the documentation for options(digits=...)
does not mention that these are *significant digits* and not
*decimal places* (which is what Joshua seems to want):
"‘digits’: controls the number of digits to print when
printing numeric values."
On the fa
ng of Numbers", covering the
functions ceiling(), floor(), trunc(), round(), signif().
Well worth reading!
Best wishes,
Ted.
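A small illustration of the distinction (the value of x is made up):
x <- 123.456789
options(digits = 4)   ## significant digits used for printing, not decimal places
x
# [1] 123.5
options(digits = 7)   ## restore the default
round(x, 2)           ## 2 decimal places
# [1] 123.46
signif(x, 2)          ## 2 significant digits
# [1] 120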
On Thu, 2018-05-31 at 08:58 +0200, Martin Maechler wrote:
> >>>>> Ted Harding
> >>>>> on Thu, 31 May 2018 07:10:32 +0100 writes:
>
On Mon, 2018-06-25 at 09:46 +1200, Rolf Turner wrote:
> Does/should one say "the degrees of freedom is defined to be" or "the
> degrees of freedom are defined to be"?
>
> Although the value of "degrees of freedom" is a single number, the first
> formulation sounds very odd to my ear.
>
> I would li
On Tue, Jul 3, 2018 at 9:25 AM, J C Nash wrote:
>
> > . . . Now, to add to the controversy, how do you set a computer on fire?
> >
> > JN
Perhaps by exploring the context of this thread,
where new values strike a match with old values???
Ted
I've been following this thread, and wondering where it might lead.
My (possibly naive) view of these matters is basically logical,
relying on (possibly over-simplified) interpretations of "NA" and "NaN".
These are that:
"NaN" means "Not a Number", though it can result from a
numerical calculatio
Pietro,
Please post this to r-help@r-project.org
not to r-help-ow...@r-project.org
which is a mailing list concerned with list management, and
does not deal with questions regarding the use of R.
Best wishes,
Ted.
On Sat, 2018-07-14 at 13:04 +, Pietro Fabbro via R-help wrote:
> I will try to b
gits = 53 binary places.
So this normally "almost" trivial feature can, for such a simple
calculation, lead to chaos or catastrophe (in the literal technical
sense).
For more detail, including an extension of the above, look at the
original posting in the R-help archives for Dec 22, 2
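As a standard illustration of the point (not the specific calculation
from the original posting, which is elided above): with 53 binary digits,
ordinary decimal fractions are not stored exactly.
0.1 + 0.2 == 0.3
# [1] FALSE
print(0.1 + 0.2, digits = 17)
# [1] 0.30000000000000004
all.equal(0.1 + 0.2, 0.3)   ## comparison with a tolerance is the usual remedy
# [1] TRUE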
[I inadvertently sent my reply below to Jeremie, instead of R-help.
Also, I have had an additional thought which may clarify things
for R users].
[Original reply]:
The point about this is that (as Rolf wrote) FALSE & (anything)
is FALSE, provided logical NA is either TRUE or FALSE but,
because the
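A short illustration of the rule just described (this is standard R
behaviour):
FALSE & NA    ## FALSE, whatever the missing value might be
# [1] FALSE
TRUE & NA     ## could be TRUE or FALSE, so the result is NA
# [1] NA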
On Thu, 2017-07-20 at 14:33 +0200, peter dalgaard wrote:
> > On 10 Jan 2013, at 15:56 , S Ellison wrote:
> >
> >
> >
> >> I am working with large numbers and identified that R loses
> >> precision for such high numbers.
> > Yes. R uses standard 32-bit double precision.
>
>
> Well, for large
tat.uiowa.edu/~luke/R/references/weakfinex.html
>> >>
>> >> As far as I can tell, weakrefs are only available via the C API. Is
>> >> there a way to do what I want in R without resorting to C code? Is
>> >> what I want to do better achieved using something other than weakrefs?
check out the data.table package, as suggested.
>
> -- Bert
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
>>>>>> options(latexcmd='pdflatex')
>>>>>>> options(dviExtension='pdf')
>>>>>>>
>>>>>>> ## Macintosh
>>>>>>> options(xdvicmd='open')
>>>>>>>
>>>>>>> ## Windows
>
l data for 200
> people over one co-variate. So I was hoping instead of completely removing
> the rows, to just somehow acknowledge that the data for this particular
> co-variate is missing in the model but not completely remove the row? This
> is more what I was hoping someone would know if it
something special about the numbers around
>>> 1000.
>>>
>>> Back to the quesion at hand: I can avoid use of round() and speed things
>>> up
>>> a little bit by just adding a small number after multiplying by 1000:
>>>
>>>> ptm &
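The quoted code is cut off above; a minimal sketch of the general idea
it describes (the value 0.29, the multiplier 100 and the epsilon 1e-9
are illustrative only):
x <- 0.29
x * 100                  ## prints as 29, but is not stored exactly as 29
# [1] 29
print(x * 100, digits = 17)
# [1] 28.999999999999996
floor(x * 100)           ## truncation exposes the representation error
# [1] 28
floor(x * 100 + 1e-9)    ## adding a tiny amount first avoids it
# [1] 29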
I should have added an extra line to the code below, to complete
the picture. Here it is (see below the line "##").
Ted.
On 11-Jan-2015 08:48:06 Ted Harding wrote:
> Troels, this is due to the usual tiny difference between numbers
> as computed by R and the numbers that yo
Implementing the above as a procedure:
agegrp[max(which(cumsum(y1994)/sum(y1994)<0.5)+1)]
# [1] "55-64"
Note that the "obvious solution":
agegrp[max(which(cumsum(y1994)/sum(y1994) <= 0.5))]
# [1] "45-54"
gives the wrong group.
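For readers without the original data, a self-contained sketch (agegrp
and y1994 below are made-up values, not those from the thread):
agegrp <- c("0-14","15-24","25-34","35-44","45-54","55-64","65+")
y1994  <- c(120, 90, 80, 70, 60, 50, 30)   ## counts per age group
cumsum(y1994)/sum(y1994)                   ## cumulative proportion per group
## first group at which the cumulative proportion passes 0.5,
## i.e. the group containing the median:
agegrp[max(which(cumsum(y1994)/sum(y1994) < 0.5)) + 1]
# [1] "25-34"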
Sorry, a typo in my reply below. See at "###".
On 12-Jan-2015 11:12:43 Ted Harding wrote:
> On 12-Jan-2015 10:32:41 Erik B Svensson wrote:
>> Hello
>> I've got a problem I don't know how to solve. I have got a dataset that
>> contains age intervals
people object to code "clutter" from parentheses that could
be more simply replaced (e.g. "var< -4" instead of "var<(-4)"),
but parentheses ensure that it's right and also make it clear
when one reads it.
Best wishes to all,
Ted.
---
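A small sketch of the pitfall referred to above: whitespace (or
parentheses) decide whether "<-" is parsed as assignment or as
"less than minus".
var <- 10
var<-4       ## parsed as assignment: var is now 4
var
# [1] 4
var <- 10
var < -4     ## comparison with -4
# [1] FALSE
var <(-4)    ## parentheses make the intent unambiguous
# [1] FALSE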
ewhere
in the spreadsheet? (Excel is notorious for planting things invisibly
in its spreadsheets which lead to messed-up results for no apparent
reason ... ).
Hoping this helps,
Ted.
I think that one can usefully look at this question from the
point of view of what "NaN" and "NA" are abbreviations for
(at any rate, according to the understanding I have adopted
since many years -- maybe over-simplified).
NaN: Not a Number
NA: Not Available
So NA is typically used for missing v
Before the ticket finally enters the waste bin, I think it is
necessary to explicitly explain what is meant by the "domain"
of a random variable. This is not (though in special cases
could be) the space of possible values of the random variable.
Definition of (real-valued) Random Variable (RV):
Le
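The definition is cut off above; for reference, a sketch of the standard
textbook formulation being alluded to (not necessarily the exact wording
of the original posting):
Let (Omega, F, P) be a probability space. A real-valued random variable
is a function X: Omega -> R such that, for every real t, the set
{omega in Omega : X(omega) <= t} belongs to F. Its domain is therefore
the sample space Omega; the set of values X can take is its range,
which is the distinction being drawn above.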
Sorry -- stupid typos in my definition below!
See at ===*** below.
On Tue, 2018-10-23 at 11:41 +0100, Ted Harding wrote:
Before the ticket finally enters the waste bin, I think it is
necessary to explicitly explain what is meant by the "domain"
of a random variable. This is not (though
example, the cumulative probability of reaching a point
> outside the cube (u or v or w > A) is 1; however, the bigger cube does not
> exist (because Q is the reference space). In other words, I feel that we
> extend the space to accommodate any cube of any size! Looks a bit weird to
>
On Mon, 2018-12-10 at 22:17 +0100, Fatma Ell wrote:
> Dear all,
> I'm trying to use ks.test in order to compare two curves. I have 0 values; I
> think this is why I have the following warning: impossible to calculate the
> exact value with ties ("ex-aequos")
>
> a=c(3.02040816326531, 7.95918367346939, 10.
4*a*b
MEAN^2 - 3*SD^2 = a*b
Hence for a >= 0 and b > a you must have MEAN^2 >= 3*SD^2.
Once you have MEAN and SD satisfying this constraint, you should
be able to solve the equations for a and b.
Hoping this helps,
Ted.
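The excerpt above is consistent with matching a Uniform(a, b)
distribution to a given mean and SD (for Uniform(a, b), mean = (a+b)/2
and variance = (b-a)^2/12, which gives exactly MEAN^2 - 3*SD^2 = a*b).
Under that assumption, a short sketch of the solution, with made-up
values for MEAN and SD:
MEAN <- 5
SD   <- 1.5
stopifnot(MEAN^2 >= 3*SD^2)    ## the constraint noted above
## a + b = 2*MEAN and a*b = MEAN^2 - 3*SD^2, so a and b are the roots of
##   t^2 - 2*MEAN*t + (MEAN^2 - 3*SD^2) = 0
d <- sqrt(3)*SD
a <- MEAN - d
b <- MEAN + d
c(a = a, b = b)
c(mean = (a+b)/2, sd = (b-a)/sqrt(12))   ## check: recovers MEAN and SD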
(t(Data_Normalized) %*% Data_Normalized)/(dim(Data_Normalized)[1]-1)
and compare the result with
cor(Data)
And why? Look at
?sd
and note that:
Details:
Like 'var' this uses denominator n - 1.
Hoping this helps,
Ted.
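A self-contained sketch of the comparison being suggested (Data here is
made up; Data_Normalized is assumed to be the column-wise scaled data,
i.e. what scale() produces, using the n-1 denominator):
set.seed(1)
Data <- matrix(rnorm(100*3), ncol = 3)
Data_Normalized <- scale(Data)     ## centre each column and divide by its sd
manual <- (t(Data_Normalized) %*% Data_Normalized)/(nrow(Data_Normalized) - 1)
all.equal(manual, cor(Data), check.attributes = FALSE)
# [1] TRUE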
cation).
The important thing when using pre-programmed functions is to know
which is being used. R uses (n-1), and this can be found from
looking at
?sd
or (with more detail) at
?cor
Ron had assumed that the denominator was n, apparently not being aware
that R
On 12-Aug-2014 22:22:13 Ted Harding wrote:
> On 12-Aug-2014 21:41:52 Rolf Turner wrote:
>> On 13/08/14 07:57, Ron Michael wrote:
>>> Hi,
>>>
>>> I would need to get a clarification on a quite fundamental statistics
>>> property, hope expeRts here would
     17 mother
107  09 sibling
107  18 father
107  19 mother
108  16 sibling
108  NA father
108  NA mother
109  17 sibling
109  NA father
109  NA mother
That's the data. Now a litt
wer to your question as
>> abs(qt(0.408831/2, df=1221)), but you'll get 4.117.
>>
>> Duncan Murdoch
>>
>>
>>
>
to generate 1000 numbers from N(u, a^2), however I don't
> want to include 0 and negative values. How can I use a beta distribution
> to approximate N(u, a^2) in R?
>
> Thx for help
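The reply is cut off above. Purely for illustration, one common way to
get draws from N(u, a^2) restricted to positive values is rejection
sampling; note this yields the normal truncated at 0 rather than the
beta approximation asked about, and u, a below are made-up values.
u <- 5; a <- 2; n <- 1000
x <- numeric(0)
while (length(x) < n) {
  y <- rnorm(n, mean = u, sd = a)
  x <- c(x, y[y > 0])     ## keep only the strictly positive draws
}
x <- x[1:n]
summary(x)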
Towards the bottom of this page is a section "Subscribing to R-help".
Follow the instructions in this section, and it should work!
Best wishes,
Ted.
>> >> >> Ista
>>> >> >>
>>> >> >> On Fri, Oct 30, 2015 at 9:15 PM, Val
>>wrote:
>>> >> >>> Hi all,
>>> >> >>> I am trying to change character to numeric but have a problem
>&
lpful replies could be sent.
>
> A milder alternative is to encourage some R-help subscribers to click the
> "Don't send" or "Save" button and think better of their replies.
>
>
> --
> Michael Friendly Email: friendly AT yorku DOT ca
> Professor, Psyc
defaults
> doesn't change that.
>
> I don't _think_ things have changed at my end ...
>
> Steve Ellison
>
> regards,
> José
esDemon"
>>
>> Finally I tried
>>
>>> install.packages("devtools") library(devtools)
>>> install_github("ecbrown/LaplacesDemon")
>>
>> but I am not able to install devtools (for similar reasons). So my
>> questions are:
w it ought to be approached; and I don't have
R code to refer to and experiment with (that of contour() is
hidden in its "method").
But people out there must have faced it, and I'd be grateful
for their own feedback from the coal-face!
With thanks,
Ted.
----
-----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Ted Harding
> Sent: Wednesday, July 16, 2008 1:17 PM
> To: [EMAIL PROTECTED]
> Subject: [R] Labelling curves on graphs
>
> Hi Folks,
> I'd be grateful for good suggestions about
On 17-Jul-08 02:32:41, Frank E Harrell Jr wrote:
> Barry Rowlingson wrote:
>> 2008/7/16 Ted Harding <[EMAIL PROTECTED]>:
>>
>>> Hi Folks,
>>> I'd be grateful for good suggestions about the following.
>>>
>>> I'm plotting a fami
, cex = 1.5)
Hope this helps.
Sincerely,
Erin
On Thu, Jun 12, 2008 at 10:38 AM, <[EMAIL PROTECTED]> wrote:
> I have a 2x2 plot set up using: par(mfrow=c(2,2))
> I'd like to put an overall title on the page, but I cannot figure
> out how. Any ideas?
Best wishes,
Ted.
6 1
# 4 4 6 3 9
# 5 11 9 6 5
Hoping this helps,
Ted.
t; for the X-axis,
and a "pos=2" for the y-axis. And I have not been able to discover
how to do this.
With thanks,
Ted.
On 21-Jul-08 12:25:32, Marc Schwartz wrote:
> on 07/21/2008 07:13 AM (Ted Harding) wrote:
>> Hi Folks,
>> I've been digging for the solution to this for several
>> hours now. If there is a solution, it must be one of the
>> worst "needle-in-a-
On 21-Jul-08 12:43:47, Duncan Murdoch wrote:
> On 7/21/2008 8:13 AM, (Ted Harding) wrote:
>> Hi Folks,
>> I've been digging for the solution to this for several
>> hours now. If there is a solution, it must be one of the
>> worst "needle-in-a-
rbind(x,y) puts x as a row above y as a row, so reading down
the columns alternates between x and y.
Hoping this helps,
Ted.
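A small illustration of the point (x and y are made-up vectors):
x <- c(1, 3, 5)
y <- c(2, 4, 6)
rbind(x, y)
#   [,1] [,2] [,3]
# x    1    3    5
# y    2    4    6
c(rbind(x, y))      ## reading down the columns alternates x and y
# [1] 1 2 3 4 5 6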
-211
# [5,]020200
# [6,]0 -50200
# [7,]0 -50200
> Many thanks for your help!
>
> Best wishes
> Christoph.
r and (after printing to file as temp.ps)
I also get the same when I view this PostScript file with gnome-gv,
if instead I send the plot to the postscript device in the first
place, i.e. the first line is
postscript('temp.ps', width=11, height=8.5)
then when I view this temp.ps in g
(M1*M2*...*Mn)^2 * (V1/(M1^2) + V2/(M2^2) + ... + Vn/(Mn^2))
so the variance depends on the means Mi as well as on the variances Vi.
Ted.
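A quick numerical check of the approximation quoted above, by simulation
with made-up means and variances for three independent variables:
set.seed(1)
M <- c(10, 5, 8)          ## means M1, M2, M3
V <- c(0.4, 0.2, 0.3)     ## variances V1, V2, V3
n <- 1e6
X1 <- rnorm(n, M[1], sqrt(V[1]))
X2 <- rnorm(n, M[2], sqrt(V[2]))
X3 <- rnorm(n, M[3], sqrt(V[3]))
var(X1*X2*X3)             ## simulated variance of the product
prod(M)^2 * sum(V/M^2)    ## the approximation: close, though not exact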
ample((1:nrow(data)),2),]
to get the sampled rows in random order, or
data[sort(sample((1:nrow(data)),2)),]
if you want just a random subset (in the original order).
(only: don't call your dataframe "data" -- that can cause problems.
Use some other name.)
Ted.
---
d give it to you. All the tabularii in togas are
throwing up their hands and asking "quem in jus vocabimus?" since
they don't really care if it works so long as they can get their
money back, only there's no money here to get back, is there?
Nescio, Marco,
place 0 by 0.0 and
3 by 3.0, and set rownames=NULL; but I don't see how to get
the counts placed in the "middle" of the range. But it's
better than nothing, I suppose!
Ted.
9273, df = 4, p-value = 0.0631
# alternative hypothesis: true mean is greater than 0
# 95 percent confidence interval:
# -1.803807 Inf
# sample estimates:
# mean of x
#17
Hoping this helps!
Ted.
e:
if( ()&() ) {
}
Example:
if( (a[x,y]>1.0)&(a[x,y]<2.0) ){
print("Between 1 and 2")
}
Hoping this helps,
Ted.
# [1] 7.814728
Ted.
L <- list(A="A", B="B", C="Z", D="D")
L
# $A
# [1] "A"
# $B
# [1] "B"
# $C
# [1] "Z"
# $D
# [1] "D"
C<-L$C ## extract $C from L
C
# [1] "Z"
C<-"C" ## change it
L$C<-C ## put it back
L
# $A
# [1] "A"
# $B
# [1] "B"
s the values correspond to. These
can be discovered by
names(GLM$fit)
but you don't want them as character strings, so convert them
to integers:
as.integer(names(GLM$fit))
Done! I hope this helps some people.
Ted.
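A self-contained sketch of the idea (df and GLM below are made up):
when glm() drops rows containing missing values, the names of the
fitted values are the row numbers of the rows actually used.
df <- data.frame(y = c(2.1, 3.4, 5.0, NA, 4.2),
                 x = c(0.2, 1.1, NA, 0.5, 0.9))
GLM <- glm(y ~ x, data = df)    ## rows 3 and 4 are dropped (NAs)
names(GLM$fit)                  ## character labels of the rows kept
# [1] "1" "2" "5"
as.integer(names(GLM$fit))
# [1] 1 2 5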
On 26-Aug-08 23:49:37, hadley wickham wrote:
> On Tue, Aug 26, 2008 at 6:45 PM, Ted Harding
> <[EMAIL PROTECTED]> wrote:
>> Hi Folks,
>> This tip is probably lurking somewhere already, but I've just
>> discovered it the hard way, so it is probably worth passing
he explanations of how the latter is achieved in R
are clear and thorough (for a basic example, see the discussion
of Factors in the first Chapter).
Ted.
[,9] [,10]
# [1,] 1.439278e-16 1.015762e-17
all of which are machine approximations to zero!
Hoping this helps,
Ted.
I just tell R to never use factors in my data
> frames?
>
> Or any other solution that occurs to people -- maybe this is the wrong
> way to go about reading in fixed width data in this kind of file.
>
> I would appreciate any help.
>
> Asher
SE)
>>
>> But raw1 still has factors! It is an old class data frame:
>>
>>> is(raw1)
>> [1] "data.frame" "oldClass"
>>
>> And it still has levels:
>>> raw1[1,1]
>> [1] Gustav wind
>> 229 Levels: - - -
. <- 1.2345
.
# [1] 1.2345
. <- function(x) x^2
.(12)
# [1] 144
So, unless there is something I don't know about, there is hardly
anything to discuss about "the detailed usage of '.' in R"!
Ted.
---
On 09-Aug-09 16:53:32, Douglas Bates wrote:
> On Sun, Aug 9, 2009 at 11:32 AM, Ted
> Harding wrote:
>> On 09-Aug-09 16:06:52, Peng Yu wrote:
>>> Hi,
>>> I know '.' is not a separator in R as in C++. I am wondering where it
>>> discusses the de
On 09-Aug-09 19:31:47, Duncan Murdoch wrote:
> (Ted Harding) wrote:
> [...]
>> Next -- and this is the real question -- how does R parse the name
>> "summary.glm"? In my naivety, I simply suppose that it looks for
>> an available function whose name is "sum
or Y = X^(-1/2)
Hoping this helps,
Ted.
en the
variables. To make progress, you would need to find a maximal linearly
independent set (or possibly find the principal components with
nonzero weights).
Ted.
randomly choose between
1,2,3,4/2,3,4,5/3,4,5,6/4,5,6,7/5,6,7,8/6,7,8,9/7,8,9,10/
Hence a result Y could be:
A <- min(V)
L <- max(V) - A + 1
M <- (0:(N-L))
Y <- 1 + (V-A) + sample(M,1)
I think this does it!
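A hedged usage sketch: V and N are not shown in the excerpt, but from
the list of candidate quadruples above they appear to be V <- 1:4 and
N <- 10.
V <- 1:4
N <- 10
A <- min(V)
L <- max(V) - A + 1
M <- (0:(N-L))
Y <- 1 + (V-A) + sample(M, 1)
Y    ## one of 1:4, 2:5, ..., 7:10, each equally likely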
puts of [1] and [2])
and hence produces a logical vector (because of the first "!").
In other words, the first "!" is applied to the result of
is.na ( f1 ) + !is.na ( f2 ).
In the form given in [3], the parentheses ensure that the logical
negations
the individual P-values in the first instance, since with
several levels some are bound to have more extreme P-values than
others.
However, your anova() test is an overall test for whether one or more
of the means jointly depart significantly from the Null Hypothesis
that they are all equal. Her
cs' protection from data loss on a crash!
>
> Barry
>
to spell out more detail of the kind of
model you are seeking to fit. What you described is not enough!
Ted.
ts of 'error'?
>>
>> One possibility in this case is to treat this as a multilevel
>> or longitudinal model, with 4 observations "per subject", and
>> correlated error within-subject.
>>
>> But y
---
as a matrix M with 500 rows), using 'page(M)', you will
again see the result paged in 'less' in a separate window.
This code was suggested by Roger Bivand in response to a query
of mine back in 2003. The URL is
http://finzi.psych.upenn.edu/R/Rhelp02/archive/21642.html
This was an improvement on
for the model you
are simulating. It will almost always not work (for reasons
illustrated above), and even if it appears to work the result
will be highly unreliable. If in doubt, have a look at what you
are getting, along the line, as illustrated above!
The above reasons almost certainly underlie
, do an R Site Search on "normal mixture" in "Functions"
at:
http://finzi.psych.upenn.edu/nmz.html
You may want to look at
http://finzi.psych.upenn.edu/R/library/mclust/html/00Index.html
("Model-Based Clustering / Normal Mixture Modeling").
Ted.
---
and no dots"
is violated throughout R itself.
Ted.
sing "}".
However, in this case (if I keep all the "{...}" for the sake of
structure) I would also tend to "save on lines" with
f <-
function()
{ if (TRUE)
{ cat("TRUE!!\n") } else
{ cat("FALSE!!\n") }
}
which is still c
consequence of small n).
You could get a less conservative interval by using an asymmetrical
interval, e.g. the 2nd and 9th, or the 3rd and 10th, when the
probability would be
1 - pbinom(1,11,1/2) - pbinom(2,11,1/2) = 0.9614258
which is pretty close to the "target
be I overlooked
something ...
(This is prompted by the recent "OT" discussion on "HT vs. HH",
to which I want to respond later).
With thanks,
Ted.
ed level.
>>
>> A Site Search in Function on "all subsets" didn't seem to yield
>> anything of the kind, which surprised me. Maybe I overlooked
>> something ...
>>
>> (This is prompted by the recent "OT" discussion on "HT vs. HH"
ix <- (M[,k]==1) ## k must be an H (then k+1 will be H)
for(i in (1:(k-1))){ ix<-ix&( !((M[,i]==1)&(M[,i+1]==1)) ) }
sum(ix)
## list(Count=sum(ix),Which=M[ix,])
}
Now, ignoring the case k=1:
HHcounts <- NULL
fo
It's not just a bunch of binary instructions to the
> computer. If the meaning and the look of the code clash, it is going
> to lead to problems.
>
> Duncan Murdoch
And surely that is precisely the point of Jim's use of ";"!
It is, in effect, ignored by R; but to J