Hi Jose,
If you are only interested in the expected duration, the problem can be solved
analytically - no simulation is needed.
Let P be the probability of reaching total.capital (and then 1-P is the
probability of losing all the money) when starting with initial.capital. This probability P
is well k
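For the fair-coin special case (p = 1/2), the well-known closed forms can be sketched in R; the function names below are illustrative, not the poster's:

```r
# Fair-coin gambler's ruin (p = 1/2): classical closed-form results.
# P(reach total.capital before ruin) = initial/total
# E[duration in turns]              = initial * (total - initial)
ruin.win.prob      <- function(initial, total) initial / total
ruin.mean.duration <- function(initial, total) initial * (total - initial)

ruin.win.prob(3, 10)       # 0.3
ruin.mean.duration(3, 10)  # 21
```

For a biased coin the formulas involve the ratio (q/p)^i and are longer, but the same "no simulation needed" point holds.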
Hey fellas:
In the context of the gambler's ruin problem, the following R code obtains the
mean duration of the game, in turns:
# total.capital is a constant, an arbitrary positive integer
# initial.capital is a constant, an arbitrary positive integer between, and not
including
# 0 and total.ca
I have three-column data in the format x, y, time.
I want to draw it as a 3-D picture: x as the x axis, y as the y axis, the xy
density as the z axis, and the corresponding time as the colour.
I have read the R manuals and failed. Since kde2d and bkde2D return
grid data, this grid data cannot be matched back to its original time.
Could an
I would like to use a text string to get a reference to an object whose name
is the text string. I have seen people using get() for this purpose, but as
far as I can tell this returns a copy of the object, not a pointer to the
object. For instance, if I were to write
x <- get("z")
attr(x, "age") <
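One common workaround, sketched below: since get() returns a value rather than a reference, modify the copy and write it back under the same name with assign(), or keep the objects in an environment (which does behave reference-like). The object names here are illustrative.

```r
# get() returns a value, not a pointer; to mutate "by name",
# modify the copy and write it back with assign().
z <- 1:3
x <- get("z")
attr(x, "age") <- 30
assign("z", x)          # write the modified copy back under the name "z"
attr(z, "age")          # 30

# Alternatively, keep mutable state in an environment:
e <- new.env()
e$z <- 1:3
attr(e$z, "age") <- 30  # modifies the object held inside the environment
```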
On Thu, 14 Aug 2008, Steven McKinney wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
On Behalf Of [EMAIL PROTECTED]
Sent: Thursday, August 14, 2008 2:00 PM
To: r-help@r-project.org
Subject: [R] Detecting duplicate values
This is another "how do I do it" t
Please clarify the data structures and what you did and enclose the
commands you used.
Consider testing with a small artificial dataset.
Choose your words carefully: 'load' is ambiguous.
Ferry wrote:
Hello,
I have 4 data frames, say A, B, C, D. Each A, B, C has different columns
set, and D has
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
> On Behalf Of [EMAIL PROTECTED]
> Sent: Thursday, August 14, 2008 2:00 PM
> To: r-help@r-project.org
> Subject: [R] Detecting duplicate values
>
> This is another "how do I do it" type of question. It seems that for
Hello,
I have 4 data frames, say A, B, C, D. Each A, B, C has different columns
set, and D has all of what A, B, and C have.
So, here is the example:
A has c1, c2, and c3
B has c4 and c5
C has c1, c5, and c6
D has c1, c2, c3, c4, c5, and c6.
I want to load these data frames into an Oracle SQL ser
Sorry, I see what you mean - Gabor has the answer.
On Thu, Aug 14, 2008 at 8:22 PM, stephen sefick <[EMAIL PROTECTED]> wrote:
> What are the comment lines at the beginning of the function? If you
> just type in the function name then you get the code, and it seems
> that R removes the comment lines
Try this (provided the comments are actually _in_ the function -- in
the example you gave they are not).
> foo <-
+ function(x){
+ # comments line 1
+ # comments line 2
+ # etc..
+ x <- 2
+ }
>
> grep("^#", attr(foo, "source"), value = TRUE)
[1] "# comments line 1" "# comments line 2" "# etc.."
A
What are the comment lines at the beginning of the function? If you
just type in the function name then you get the code, and it seems
that R removes the comment lines to make the code more readable. I
just looked at a function in a package that I created and know that
there are comments in there
Hmm,
couldn't resist:
> X <- NA
> is.logical(X)
[1] TRUE
> (X == TRUE)
[1] NA
> "==.MaybeNA" <- function(e1, e2) { !is.na(e1) && (e1 == e2) }
> X <- structure(NA, class="MaybeNA")
> is.logical(X)
[1] TRUE
> (X == TRUE)
[1] FALSE
Ta da ;)
Henrik
PS. It might be worth mentioning base::isTRUE()
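As the PS suggests, base::isTRUE() sidesteps the NA pitfall without needing a custom class; a minimal illustration:

```r
# isTRUE(x) is TRUE only when x is a length-one logical TRUE,
# so NA and non-logical values fall through safely.
X <- NA
X == TRUE     # NA -- comparison propagates missingness
isTRUE(X)     # FALSE
isTRUE(TRUE)  # TRUE
```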
Dear R users,
I am pleased to announce the release of the new BiplotGUI package on
CRAN.
Biplots are graphs in which the samples and all the variables of a data
matrix are represented simultaneously. They can be very useful for
exploring multivariate data.
The BiplotGUI package allows users to c
Hi All,
Imagine that we have a function defined as follows:
foo <-
# comments line 1
# comments line 2
# etc..
function(x){
x <- 2
}
I can use args(foo) to get the arguments for foo. Is there any function to
get
the comment lines at the beginning of the function?
From what I understand the prom
I'm using the ecdf function to look at a number of empirical distributions
graphically for run-time analyses of stochastic optimization algorithms.
When dealing with problems where the optimal solution for these problems is
always found everything is fine and the graphs are very useful for
comparat
On Thu, 14 Aug 2008, Farley, Robert wrote:
BINGO!
str(SurveyData$direction_)
Factor w/ 2 levels "EASTBOUND
",..: 1 1 1 1 2 2 1 1 2 1 ...
levels(SurveyData$direction_)
[1] "EASTBOUND " "WESTBOUND
"
Was my mistake in how I read the data?
SurveyDa
# Trim white space (leading and/or trailing). Includes tabs.
trim <- function(str)gsub('^[[:space:]]+', '', gsub('[[:space:]]+$', '', str))
#
levels(SurveyData$direction_) <- trim(levels(SurveyData$direction_))
> Date: Thu, 14 Aug 2008 15:55:38 -0700
> From: [EMAIL PROTECTED]
> To: r-help@r-proj
BINGO!
> str(SurveyData$direction_)
Factor w/ 2 levels "EASTBOUND
",..: 1 1 1 1 2 2 1 1 2 1 ...
> levels(SurveyData$direction_)
[1] "EASTBOUND " "WESTBOUND
"
>
Was my mistake in how I read the data?
SurveyData <- read.spss("C:/Data/R/orange_delivery.sav
Hi Kevin,
I learned to use tryCatch() from this page:
http://www1.maths.lth.se/help/R/ExceptionHandlingInR/
Hope this helps,
ST
- Original Message
From: Luke Tierney <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: r-help@r-project.org
Sent: Thursday, August 14, 2008 8:59:35 AM
Subject
I can't tell exactly what's wrong; check out the ?str and ?levels
help pages for some guidance.
Farley, Robert wrote:
I can't figure out the syntax I need to get subset to work. I'm trying
to split my dataframe into two parts. I'm sure this is a simple issue,
but I'm stumped. I either ge
My guess is that you maybe have some leading or trailing spaces in the
direction_ variable.
Try a tabulation of the direction_ variable to see exactly what values are
in your data set.
David Scott
On Thu, 14 Aug 2008, Farley, Robert wrote:
I can't figure out the syntax I need to get sub
I can't figure out the syntax I need to get subset to work. I'm trying
to split my dataframe into two parts. I'm sure this is a simple issue,
but I'm stumped. I either get all or none of the original "rows".
> XTTable <- xtabs( ~ direction_ , SurveyData)
> XTTable
direction_
I just noticed a certain ``usage'' in a recent posting, and couldn't
restrain myself from commenting. The usage was of the form
``if(X==TRUE)''
where X was a logical variable.
This sort of thing is brought to you by your Department of Redundancy
Department. The ``==TRUE'' bit is irr
Dear R-users,
I am having a problem with passing an argument to a function in a do.call
function which itself is in a for statement.
I am building a function to upload question pools to the blackboard learning
environment.
This function which transforms questions to XML style output should b
I didn't describe the problem clearly. It's about the number of distinct
values, so please ignore the cycle issue.
My tests were:
RNGkind(kind="Knuth-TAOCP");
sum(duplicated(runif(1e7))); #return 46552
RNGkind(kind="Knuth-TAOCP-2002");
sum(duplicated(runif(1e7))); #return 46415
#These collision f
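The observed duplicate counts can be checked against the birthday-problem approximation; this sketch (my own, not from the thread) suggests these generators are effectively drawing from about 2^30 distinct values rather than 2^32:

```r
# Birthday-problem approximation: drawing n values uniformly from 2^bits
# distinct values, the expected number of duplicates is roughly n^2 / 2^(bits+1).
expected.dups <- function(n, bits) n^2 / 2^(bits + 1)

expected.dups(1e7, 32)  # ~11642 -- what 2^32 distinct values would give
expected.dups(1e7, 30)  # ~46566 -- close to the ~46500 duplicates observed,
                        # consistent with about 2^30 distinct values
```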
Hi,
I am new to R, and am trying to solve the following optimization problem:
This is a nonlinear least squares problem. I have a set of 3D voxels. All I
need is to find a least squares fit to this data. The data model actually
represent a cube-like structure, consisting of seven straight lines.
I don't want to belabor the point, but these are basically bad ideas. IEEE
floating point arithmetic is specifically designed to handle (at least the
usual) situations of division by exact 0, log(0), 0/0 and so forth with
special constants like Inf, -Inf and NaN (see ?Inf). Testing and converting
Shengqiao Li wrote:
Hello all,
I am generating large samples of random numbers. The RNG help page says:
"All the supplied uniform generators return 32-bit integer values that are
converted to doubles, so they take at most 2^32 distinct values and long
runs will return duplicated values." But
This is another "how do I do it" type of question. It seems that for a function
that I have it has a hard time with lists that have a single repeated value. I
want a function (or expression) that will detect this condition. I want to
detect:
c(2,2,2,2,2)
OR
c(1)
The following is OK
c(2,3,3,2
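One simple way to detect the "all values identical" condition (including the length-one case) is to count distinct values; a sketch with an illustrative function name:

```r
# TRUE when a vector holds at most one distinct value.
is.constant <- function(x) length(unique(x)) <= 1

is.constant(c(2, 2, 2, 2, 2))  # TRUE
is.constant(c(1))              # TRUE
is.constant(c(2, 3, 3, 2))     # FALSE
```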
Sorry for jumping in. I haven't read the entire conversation. But imagine you
want to compute 1/x for the entire matrix and your matrix has 0s and NAs.
Then you could do:
##define function
f<-function(x){1/x}
##sample data
y=c(0,1,2,3,4,5,6,7,NA)
##arrange in matrix
mat=matrix(y,3,3)
##appl
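The fragment above is cut off; a completed version of the same idea, as a sketch: apply 1/x to the whole matrix at once (no loop needed), then clean up the non-finite results that IEEE arithmetic produces for the zeros.

```r
# Vectorized 1/x over a matrix containing 0s and NAs.
f <- function(x) 1 / x
y <- c(0, 1, 2, 3, 4, 5, 6, 7, NA)
mat <- matrix(y, 3, 3)
res <- f(mat)               # 1/0 gives Inf under IEEE arithmetic, 1/NA stays NA
res[!is.finite(res)] <- NA  # turn the Inf into NA; existing NAs are unchanged
res
```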
look at zoo and roll mean
On Thu, Aug 14, 2008 at 3:52 PM, William Pepe <[EMAIL PROTECTED]> wrote:
>
> Dear all. I have data that looks like this:
>
> Biller  Cycle  Jan  Feb  Mar  Apr  May  June
> AB      1      100  150  150  200  300  450
> JL      2      650  600  750  700  850  800
> JL      3      700  740  680  690  700
Shengqiao Li wrote:
Hello all,
I am generating large samples of random numbers. The RNG help page
says: "All the supplied uniform generators return 32-bit integer
values that are converted to doubles, so they take at most 2^32
distinct values and long runs will return duplicated values." But
Dear all. I have data that looks like this:
Biller  Cycle  Jan  Feb  Mar  Apr  May  June
AB      1      100  150  150  200  300  450
JL      2      650  600  750  700  850  800
JL      3      700  740  680  690  700  580
IR      1      455  400  405  410  505  550
IR      4      600  650  700  750  650  680
IR      5      100  150  120  1
on 08/14/2008 04:53 AM Barry Rowlingson wrote:
> 2008/8/14 Marc Schwartz <[EMAIL PROTECTED]>:
>
>>> I think it's an Ubuntu bug, because nothing like it occurs anywhere else.
>>> So I'd suggest you turn off compiz or switch to a reliable OS like Windows
>>> ;-).
>> Gack... ;-)
>
> What do you r
On Thu, Aug 14, 2008 at 2:30 PM, Jason Pare <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am searching for the best method to plot two variables with points
> whose output color depends on the size of a third variable. For
> example, the darkness of the x-y point would increase incrementally
> based on
Hello,
I am searching for the best method to plot two variables with points
whose output color depends on the size of a third variable. For
example, the darkness of the x-y point would increase incrementally
based on the size of the z value, similar to the colramp parameter in
geneplotter. This wo
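One simple base-graphics approach to the question above, sketched with simulated data: map the third variable onto a grey ramp (or a colour ramp) and pass the result to `col`.

```r
# Map z onto point colour: darker points for larger z.
set.seed(1)
x <- runif(50); y <- runif(50); z <- runif(50)

shade <- gray(1 - (z - min(z)) / diff(range(z)))  # gray(0) = black, gray(1) = white
plot(x, y, pch = 16, col = shade)

# The same idea with a colour ramp instead of grey levels:
pal <- colorRampPalette(c("lightblue", "darkblue"))(100)
idx <- cut(z, breaks = 100, labels = FALSE)       # bin z into 1..100
plot(x, y, pch = 16, col = pal[idx])
```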
on 08/14/2008 01:33 PM Frank E Harrell Jr wrote:
> For all those useRs who didn't make it to Dortmund Germany for useR!
> 2008 you missed a great conference. Hats off to Uwe Ligges and the
> other organizers and assistants who planned and executed the meeting
> superbly.
>
> Frank
Hear! Hear!
For all those useRs who didn't make it to Dortmund Germany for useR!
2008 you missed a great conference. Hats off to Uwe Ligges and the
other organizers and assistants who planned and executed the meeting
superbly.
Frank
--
Frank E Harrell Jr Professor and Chair School of Medicine
Hi: If I remember correctly, I think I gave you something like mat[mat
== 0]<-NA. I think what you're doing below is pretty different from
that, but I may not be understanding what you want. Let me know if I can
clarify more, because my intention was not to guide you
into doing the below. Looping is
Right you are! Thank you for finding my mistake. I have been trying all
sorts of combinations and I just dropped the ball. Thank you and thanks to
Professor Ripley!
Charles Annis, P.E.
[EMAIL PROTECTED]
phone: 561-352-9699
eFax: 614-455-3265
http://www.StatisticalEngineering.com
-Origi
Hello all,
I am generating large samples of random numbers. The RNG help page says:
"All the supplied uniform generators return 32-bit integer values that are
converted to doubles, so they take at most 2^32 distinct values and long
runs will return duplicated values." But I find that the cycle
Professor Ripley:
Not quite. Here is what works, followed by what doesn't:
What works:
> par <- NIM.results$par
> list(par[1], par[2], par[3], par[4], par[5], a.hat.decision,
noise.threshold, a.hat.vs.a.data)
[[1]]
[1] 16.91573
[[2]]
[1] 0.9176942
[[3]]
[1] 1.715070
[[4]]
[1] 39.69884
[[5]]
[
Hi
[EMAIL PROTECTED] napsal dne 14.08.2008 16:34:32:
>
> Hi, I am working with R and I have some doubts. I am working with daily
data of
> MNEE parameter for 6 years (2001:2006). What I am trying to do is to
replace
> missing data for a particular period of time of a particular year, with
the
Read the argument descriptions and look at the examples in ?tryCatch.
The `expr' argument (i.e. the code to try) and the `finally' argument
are expressions that are evaluated (via standard lazy evaluation of
arguments). The error condition handlers, provided as the `...'
argument in errorClass = h
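A minimal sketch of the semantics described above: the handler's name matches the condition class it catches, and `finally` runs whether or not an error was signalled.

```r
# tryCatch: handlers are named by condition class; finally always runs.
result <- tryCatch({
  stop("boom")                            # signals an error condition
}, error = function(e) {
  paste("caught:", conditionMessage(e))   # handler's value becomes the result
}, finally = {
  cat("cleanup always runs\n")
})
result  # "caught: boom"
```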
On Thu, 14 Aug 2008, Charles Annis, P.E. wrote:
Thank you Professor Ripley:
From your response it is clear that I left out something important, for
which I apologize.
Part of the et cetera is another list. Your method, which would work
otherwise, also converts the other list to its members.
Thank you Professor Ripley:
From your response it is clear that I left out something important, for
which I apologize.
Part of the et cetera is another list. Your method, which would work
otherwise, also converts the other list to its members. The program being
called has an argument list like
Hi
[EMAIL PROTECTED] napsal dne 14.08.2008 16:05:11:
>
> I need to do a non-linear regression in the form of
>
> Y = a0 + a1 * arctan(a2 * x) + error.
>
> A data sample (X,Y) is available, but I can't remember how to run this
sort
> of regression through R so that I get a value for a0, a1 an
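A sketch of how this model can be fitted with nls(); the data here are simulated and the starting values are rough guesses (in practice, eyeball a0 from the vertical offset, a1 from the range of Y, and a2 from the slope near x = 0):

```r
# Fit Y = a0 + a1*atan(a2*x) by nonlinear least squares.
set.seed(42)
x <- seq(-10, 10, length.out = 200)
y <- 2 + 3 * atan(0.5 * x) + rnorm(200, sd = 0.1)  # true a0=2, a1=3, a2=0.5

fit <- nls(y ~ a0 + a1 * atan(a2 * x),
           start = list(a0 = mean(y), a1 = 1, a2 = 1))
coef(fit)  # should come out close to a0 = 2, a1 = 3, a2 = 0.5
```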
Would something like this work?
my.list <- as.list(c(NIM.results$par, a.hat.decision, etc))
do.call("Draw.NIM.POD.curve", my.list)
-Christos Hatzis
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Charles Annis, P.E.
> Sent: Thursday, August 14, 2
Hello r-help,
As the title suggests, I'm attempting to fit a negative binomial GLM
with a fixed dispersion parameter.
Both glm.nb() and glm(..., family=negative.binomial(theta, ...)) (using
MASS) do not appear to allow this; upon specifying a value for theta,
each then proceeds to re-estimate it.
Try c(as.list(par), a.hat.decision, et cetera ...)
We are guessing what any of these are, of course.
On Thu, 14 Aug 2008, Charles Annis, P.E. wrote:
R-ians:
After some effort I coerced my code to do what I want but my syntax is a
kludge. Suggestions on more elegant syntax?
par
I need to do a non-linear regression in the form of
Y = a0 + a1 * arctan(a2 * x) + error.
A data sample (X,Y) is available, but I can't remember how to run this sort
of regression through R so that I get a value for a0, a1 and a2.
Can someone please give me a hint?
Thank you in advance.
__
Steve,
thank you so much for the very, very useful help and comments!
For my research I used a surrogate approach (number of surrogates = 100) and I
made a comparison between index values (very similar to a correlation index
computed with a second known signal) from my original data and from
surrog
You haven't given us the 'at a minimum' information that the posting guide
requested, so we don't know your timezone. But in America/Toronto on my
F8 machine
d <- "2007-11-04 01:30:00"
dd <- as.POSIXct(d)
c(dd, dd+3600)
[1] "2007-11-04 01:30:00 EDT" "2007-11-04 01:30:00 EST"
Note the change
Hi,
I am looking at the effects of two explanatory variables on chlorophyll.
The data are an annual time-series (so are autocorrelated) and the
relationships are non-linear. I want to account for autocorrelation in
my model.
The model I am trying to use is this:
library(mgcv)
gam1 <-g
R-ians:
After some effort I coerced my code to do what I want but my syntax is a
kludge. Suggestions on more elegant syntax?
par <- NIM.results$par
do.call("Draw.NIM.POD.curve", list(par[1], par[2], par[3], par[4],
par[5], a.hat.decision, et cetera ...
It seems that
Hi Paul,
Unfortunately, the book is a bit ahead of the current version of
ggplot2 code, and it's a bit of pain to do in the 0.6 code. Your best
bet is to have a look at grid.grab to capture the grob before it is
drawn.
In the development version the new theming support lets completely
turn off m
On Wed, 13 Aug 2008, Moshe Olshansky wrote:
Since 0 can be represented exactly as a floating point number, there is no
problem with something like x[x==0].
What you can not rely on is something like 0.1+0.2 == 0.3 to be TRUE.
As I tried to indicate in my previous email, it is more complicated
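The standard idiom for the `0.1+0.2 == 0.3` case, as a small sketch: compare with a tolerance via all.equal() instead of `==`.

```r
# Exact equality fails for non-representable decimals;
# all.equal() compares within a numeric tolerance.
0.1 + 0.2 == 0.3                   # FALSE -- binary representation error
isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE

# Selecting "equal to 0.3" elements with a tolerance:
x <- c(0, 0.1 + 0.2, 1)
x[sapply(x, function(v) isTRUE(all.equal(v, 0.3)))]
```

Note that all.equal() returns a string describing the difference when the values differ, which is why the isTRUE() wrapper is needed in conditions.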
Hi there,
I'm having a problem with as.dist when I tried to convert a numerical
matrix to dist. The data matrix is 10^4 by 10^4. I got the following:
d <- as.dist(dat)
Error: cannot allocate vector of size 762.9Mb
I need convert "dat" to dist because I will use hclust to do some
clustering a
I would like to use the 'tryCatch' function but am having a hard time getting
my head around it. In 'C' like languages try/catch/finally means try a block of
statements and if any throw an error then do the statements in the catch block
and then, error or not, always do the statements in the finall
Hi,
I am computing some time differences.
Using the linux version of R 2.7.1
And I am getting a strange result ( see below )
I need the difference in minutes.
Actually looking for where it is NOT 15 minutes.
Would anyone know why this could be happening?
Or should I do this another way?
Bill
Th
Hi, I am working with R and I have some doubts. I am working with daily data of
MNEE parameter for 6 years (2001:2006). What I am trying to do is to replace
missing data for a particular period of time of a particular year, with the
mean of that variable for the rest of the years during that given
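One way to do this, sketched with simulated data (the variable names are illustrative, not from the post): group the daily series by day-of-year with ave() and fill each missing value with the mean of that calendar day across the other years.

```r
# Fill NAs in a daily series with the across-year mean for the same calendar day.
set.seed(2)
dates <- seq(as.Date("2001-01-01"), as.Date("2006-12-31"), by = "day")
mnee  <- rnorm(length(dates))
mnee[format(dates, "%Y") == "2003" & format(dates, "%m") == "06"] <- NA  # a gap

doy <- format(dates, "%m-%d")              # grouping key: day-of-year
filled <- ave(mnee, doy, FUN = function(v) {
  v[is.na(v)] <- mean(v, na.rm = TRUE)     # mean over the remaining years
  v
})
sum(is.na(filled))  # 0
```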
Hi Richard,
It is if you use Rattle. Rattle allows you to do that for quite a few types
of models and has a really nice GUI. It has been developed for
the purposes of data mining, and given your use of the term "score" in your
post, I assume in part that this is what you are looking
I am not sure that format solves the problem (although again I may
well be missing something)
# trailing zeros on large numbers
> format(vec, digits=4, scientific=F)
[1] "  0.8000" "123.4567" "  0.1235" "  7.6543" "7654321."
# misses trailing zeros
> format(vec[1], digits
Barry Rowlingson lancaster.ac.uk> writes:
>
> 2008/8/14 Marc Schwartz comcast.net>:
>
> >> I think it's an Ubuntu bug, because nothing like it occurs anywhere else.
> >> So I'd suggest you turn off compiz or switch to a reliable OS like Windows
> >> .
> >
> > Gack...
>
>
[I'm not sure
Many thanks Daniel,
Tolga
"Daniel Malter" <[EMAIL PROTECTED]>
13/08/2008 21:22
To
<[EMAIL PROTECTED]>,
cc
Subject
AW: [R] which alternative tests instead of AIC/BIC for choosingmodels
your model 3 is the unrestricted model and your models 1 and 2 are
restricted models. you can test mo
Dear Prof. Ripley,
Thanks for this; I now appreciate the point about Cp significantly more.
Tolga
Prof Brian Ripley <[EMAIL PROTECTED]>
13/08/2008 21:29
To
[EMAIL PROTECTED]
cc
r-help@r-project.org
Subject
Re: [R] which alternative tests instead of AIC/BIC for choosing models
Cp is eith
I think you missed the function called 'format'. R's internal print
routines (used by format) calculate the format passed to (C level) sprintf
based on the input, including 'digits'. Just make sure you pass one
number at a time to format() if you don't want a common layout for all the
numbers
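For the "significant figures with trailing zeros, no scientific notation" request specifically, formatC() may be a closer fit than format(); a sketch (format "fg" means significant digits in fixed notation, and flag "#" keeps trailing zeros):

```r
# Significant figures, fixed notation, trailing zeros kept.
formatC(0.8,      digits = 4, format = "fg", flag = "#")  # "0.8000"
formatC(123.4567, digits = 4, format = "fg", flag = "#")  # "123.5"
formatC(7654321,  digits = 4, format = "fg", flag = "#")  # stays non-scientific

# Applied element-wise, each number gets its own layout:
sapply(c(0.8, 123.4567), formatC, digits = 4, format = "fg", flag = "#")
```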
Hi,
I want to remove minor-horizontal and minor-vertical lines from a plot
generated with ggplot2. I can do this after the plot has been drawn
using grid.remove.
However, I would like to do it before drawing my plot. The ggplot2 book
refers to a ggplotGrob function which creates a grob fr
Richard Palmer gmail.com> writes:
>
> .. I am new to R but experienced in SAS. SAS has the capability to let me
> develop a model from a sample and use the results to score the records of
> another file which won't fit in memory. Is this straightforward in R or
> does it require coding to do t
maybe you can do something about like this:
#height.r script beg
SourceFileDir <- "f:/documents/data & fits/THz Imaging Lab/r scripts"
SourceFileName <- "height.r"
source(file.path(SourceFileDir, SourceFileName))
# height.r script end
savehistory()
FileNameSourced <- gsub("source\\(|\\)|\"", "",
Hi everyone,
I can't figure out how to format numbers to have a certain number of
significant figures (as opposed to decimal places) without using
scientific notation and including trailing zeros if necessary.
e.g. I would like to achieve the following:
0.81 ---> 0.8000
123.4567--
Dear R users,
is there a hack to get the filename of the current .r script being sourced/run?
My issue: I have a couple of scripts which were optimised and are placed in
tens of directories. (I have a height.r script in 30 directories, a lines.r
script in 25 directories and another flow.r script in
Now this is really specific. I think the cause of the error is a small sample
size. For example, the following both fail:
fit <- fitdistr(c(120), "weibull")
fit <- fitdistr(jitter(c(120,120), amount=0.5), "weibull")
As it is hard for me to control the sample size or the proximity of data values
.. I am new to R but experienced in SAS. SAS has the capability to let me
develop a model from a sample and use the results to score the records of
another file which won't fit in memory. Is this straightforward in R or
does it require coding to do the scoring in segments? Can someone point me
t
On Thu, 2008-08-14 at 16:16 +0530, Yogesh Tiwari wrote:
> Hello R Users,
> I am using R on Windows,
> I am plotting CO2 (350,380) varying with year (1993,2003)
> I want to over-plot rainfall (10,250) varying with year(1993,2003) on axis-4
>
> axis-1=year(1993,2003),
> axis-2=CO2(350,380)
> axis-3
FAQ #3 in
library(zoo)
vignette("zoo-faq")
shows how, as does one of the examples in ?plot.zoo.
Also look at twoord.plot in the plotrix package.
On Thu, Aug 14, 2008 at 6:46 AM, Yogesh Tiwari
<[EMAIL PROTECTED]> wrote:
> Hello R Users,
> I am using R on Windows,
> I am plotting CO2 (350,380) vari
Hi R users,
I had already posted this question under the title "bmp header", but there was
a mistake in my post. The following is the same post without the mistake.
Thanks, Rostam
I have a xml file. A value of one of the nodes of the xml file is a bmp
image encoded in base64. I would like to read this
You could look at how the same problem is dealt with in package "financial".
Paul Bivand
2008/8/1 Moshe Olshansky <[EMAIL PROTECTED]>
>
> You can use uniroot (see ?uniroot).
>
> As an example, suppose you have a $100 bond which pays 3% every half year (6%
> coupon) and lasts for 4 years. Suppose
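The truncated example above can be sketched out as follows (my completion, under the stated assumptions: $100 face value, $3 paid every half year for 4 years, i.e. 8 periods): the per-period yield is the root of the pricing equation, which uniroot finds.

```r
# Price of the bond at per-period yield y: discounted coupons plus face value.
bond.price <- function(y, face = 100, coupon = 3, periods = 8) {
  sum(coupon / (1 + y)^(1:periods)) + face / (1 + y)^periods
}

bond.price(0.03)  # 100 -- at a 3% per-period yield the bond trades at par

# Yield-to-maturity for a bond trading at, say, $95:
ytm <- uniroot(function(y) bond.price(y) - 95, interval = c(1e-6, 1))$root
ytm * 2  # annualized (two periods per year); somewhat above the 6% coupon
```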
Hello R Users,
I am using R on Windows,
I am plotting CO2 (350,380) varying with year (1993,2003)
I want to over-plot rainfall (10,250) varying with year(1993,2003) on axis-4
axis-1=year(1993,2003),
axis-2=CO2(350,380)
axis-3=None
axis-4=rainfall(10,250)
Kindly help how to over-plot another varia
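Beyond the zoo and plotrix pointers, a plain base-graphics sketch of the right-hand axis (axis 4) overlay, with simulated data standing in for the CO2 and rainfall series:

```r
# Overlay a second series with its own scale on axis 4.
set.seed(3)
year     <- 1993:2003
co2      <- seq(350, 380, length.out = length(year))
rainfall <- runif(length(year), 10, 250)

par(mar = c(5, 4, 2, 4))                 # leave room in the right margin
plot(year, co2, type = "l", ylab = "CO2")
par(new = TRUE)                          # next plot draws over the current one
plot(year, rainfall, type = "l", lty = 2, axes = FALSE, xlab = "", ylab = "")
axis(4)                                  # rainfall scale on the right
mtext("Rainfall", side = 4, line = 3)
```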
giov,
It sounds like you have approximately symmetric distributions. If that
is so, and particularly if the standard deviation is less than about 20%
of the mean, I'll stick my neck out and say I would assume underlying
normality for outlier testing purposes unless there's a reason to do
otherwise
On Thu, Aug 14, 2008 at 07:46:41PM +1000, Jim Lemon wrote:
> On Wed, 2008-08-13 at 19:14 -0700, Mark Home wrote:
> > Dear All:
> >
> > I have a clinical study where I would like to compare the demographic
> > information for 2 samples in a study. The demographics include both
> > categorical an
2008/8/14 Marc Schwartz <[EMAIL PROTECTED]>:
>> I think it's an Ubuntu bug, because nothing like it occurs anywhere else.
>> So I'd suggest you turn off compiz or switch to a reliable OS like Windows
>> ;-).
>
> Gack... ;-)
What do you recommend for multiple desktops in Windows? I've
currently
On Wed, 2008-08-13 at 19:14 -0700, Mark Home wrote:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and continuous variables. I would like to be able to say whether
> the
Em Qua, 2008-08-13 às 19:14 -0700, Mark Home escreveu:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and continuous variables. I would like to be able to say whether
> t
Use a smaller alpha value rather than 0.05.
C
On Thu, Aug 14, 2008 at 10:14 AM, Mark Home <[EMAIL PROTECTED]> wrote:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and con
Peter Dalgaard wrote:
Pedro Mardones wrote:
Thanks for the reply. The SAS output is attached but seems to me that
doesn't correspond to the wihtin-row contrasts as you suggested. By
the way, yes the data are highly correlated, in fact each row
correspond to the first part of a signal vector. Tha
Peter Dalgaard wrote:
Pedro Mardones wrote:
Thanks for the reply. The SAS output is attached but seems to me that
doesn't correspond to the wihtin-row contrasts as you suggested. By
the way, yes the data are highly correlated, in fact each row
correspond to the first part of a signal vector. Tha