Here's an interesting article:
Collections in R: Review and Proposal
Timothy Barry
The R Journal
doi: 10.32614/RJ-2018-037
https://journal.r-project.org/archive/2018/RJ-2018-037/RJ-2018-037.pdf
On Tue, Nov 2, 2021 at 10:48 PM Yonghua Peng wrote:
>
> I know this is a ne
There are some differences in R, between Windows and Linux.
You could try the 'shell' command instead.
#On Windows
?shell
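A small illustration (Windows only; "dir" is just an example of a
shell built-in, which system() can't run directly):
#shell() passes the command to the Windows command interpreter
shell ("dir", intern=TRUE)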
On Tue, Aug 24, 2021 at 4:53 AM Anas Jamshed wrote:
>
> I have the file GSE162562_RAW. First I untar them
> by untar("GSE162562_RAW.tar")
> then I am running like:
> system(
I meant:
x0 = c (1, 1e-3, 0)
Not:
x0 = c (1, 1e6, 0)
So, a large intentional error may work too.
Possibly better...?
On Thu, May 27, 2021 at 6:00 PM Abby Spurdle wrote:
>
> If I can re-answer the original post:
> There's a relatively simple solution.
> (For these problems, at l
cient solutions, that the package
maintainers may (or may not) want to address.
On Thu, May 27, 2021 at 3:27 PM Abby Spurdle wrote:
>
> I need to retract my previous post.
> (Except the part that R has extremely good numerical capabilities).
>
> I ran some of the examples, an
I need to retract my previous post.
(Except the part that R has extremely good numerical capabilities).
I ran some of the examples, and Hans W was correct.
well chosen mathematical and statistical graphics.
On Sun, May 23, 2021 at 5:25 PM Abby Spurdle wrote:
>
> For a start, there are two local minima.
>
> Add to that floating point errors.
> And possible assumptions by the package authors.
>
> begin code
> f <- function (x
Sorry, missed the top line of code.
library (barsurf)
> you can guess
> from the equality constraint. And 'auglag()' does find the minimum, so
> no need for a special approach.
>
> I was/am interested in why all these other good solvers get stuck,
> i.e., do not move away from the starting point. And how to avoid this
> in gen
Sorry, this might sound like a poor question:
But by "on the unit sphere", do you mean on the ***surface*** of the sphere?
In which case, can't the surface of a sphere be projected onto a pair
of circles?
Where the cost function is reformulated as a function of two (rather
than three) variables.
I couldn't find a predefined function for this purpose.
However, it wouldn't be too difficult to write a pair of functions.
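A rough sketch of the reparameterization (the cost function here is
made up, just for illustration):

#hypothetical cost function, defined on (x, y, z)
cost3 <- function (p) sum (p * c (1, -2, 3) )

#the same cost, as a function of two angles, on the unit sphere
cost2 <- function (a)
{   p <- c (sin (a [1]) * cos (a [2]), sin (a [1]) * sin (a [2]), cos (a [1]) )
    cost3 (p)
}

optim (c (pi / 2, 0), cost2)$par #unconstrained, in two variables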
The big question is how flexible does the rendering function need to be?
#plot from angles, distances, etc
#(angles on arbitrary scale)
spiechart.render <- function (
an
The jump is from non-A-grade to A-grade, not non-pass to pass.
On Sat, Mar 27, 2021 at 10:00 PM Abby Spurdle wrote:
>
> Sorry.
> I just realized, after posting, that the "n" value in the dispersion
> calculation isn't correct.
> I'll have to revisit the simulation,
Sorry.
I just realized, after posting, that the "n" value in the dispersion
calculation isn't correct.
I'll have to revisit the simulation, tomorrow.
On Sat, Mar 27, 2021 at 9:11 PM Abby Spurdle wrote:
>
> Hi Rolf,
>
> Let's say we have a course called Co
Hi Rolf,
Let's say we have a course called Corgiology 101, with a single moderated exam.
And let's say the moderators transform initial exam scores, such that
there are fixed percentages of pass rates and A grades.
Rather than count the number of passes, we can count the number of "jumps".
That i
Hi Stefano,
My package, vectools, is partly designed for this purpose.
(Unfortunately, the package *is* subject to *change*, and some of the
functions may change in the next update).
library (vectools)
which.maxs (x, ret.type="intervals")[,1] # c (8, 10, 13)
which.mins (x, ret.type="intervals")[,
I haven't checked this, but I guess that the number of students that
*pass* a particular exam/subject, per semester would be like that.
e.g.
Let's say you have a course in maximum likelihood, that's taught once
per year to 3rd year students, and a few postgrads.
You could count the number of passe
re always twist the original data and spits only
> descriptive results.
>
> All your results are quite consistent with the available values as they are
> close to 1, so for me, each approach works.
>
> Thank you again.
>
> Best regards.
> Petr
>
> > -Origina
everything:
u <- seq (0.01, 1.65,, 200)
v <- plnorm (u, mean (x), sd (x), FALSE)
plot (u, v, type="l", ylim = c (0, 1) )
points (temp$size, temp$percent, pch=16)
points (0.1, psolution, pch=16, col="blue")
On Sat, Mar 6, 2021 at 8:09 PM Abby Spurdle wrote:
>
> I
I suspect that there's a relatively easy way of finding the parameters.
I'll think about it...
But someone else may come back with an answer first...
On Sat, Mar 6, 2021 at 8:17 AM Abby Spurdle wrote:
>
> I note three problems with your data:
> (1) The name "percent" is mislea
wrote:
>
> Sounds like you always got corrupted vesions. Either an issue with your
> connection or the mirror. What happens if you try another mirror and
> clear your browser caches?
>
> Best,
> Uwe Ligges
>
> On 05.03.2021 08:58, Abby Spurdle wrote:
> > Does the
I note three problems with your data:
(1) The name "percent" is misleading, perhaps you want "probability"?
(2) There are straight (or near-straight) regions, each of which is
equally (or near-equally) spaced, which is not what I would expect in
problems involving "quantiles".
(3) Your plot (appro
Does the following sound familiar?
The Windows installer starts installing (or decompressing) R, flashing
one file name at a time.
And then, part way through, it says a file is corrupt, and gives you
the choice to ignore.
And if you click ignore, then the next file does the same thing.
And one qui
I can't help but feel that a discussion on the merit of BMI is a
digression, from the OP's question.
In addition to being of no relevance to "R Programming".
In relation to Richard's technical comments:
As per my previous post, it is possible to get *relative* measures.
(Assuming the images are n
Hi Paul,
If the background is relatively uniform:
Then a simple algorithm could be used, to distinguish foreground from
background points.
Essentially, returning a logical matrix.
Otherwise, I'm assuming that suitable pooling/convolution operations
could be used for this purpose.
Then you could
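e.g. A minimal sketch of the uniform-background case (the threshold
and the test data are made up):

is.foreground <- function (img, threshold=0.5)
    img < threshold #logical matrix, TRUE for darker (foreground) points

img <- matrix (runif (100), 10, 10) #stand-in for real image data
table (is.foreground (img) )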
setClass ("Quad.S4", contains = "function", slots = list (p="numeric") )
Quad.S4 <- function (p = c (0, 0, 1) )
{ f <- function (x)
{ this <- sys.function ()
p <- this@p
p [1] + p [2] * x + p [3] * x^2
}
new ("Quad.S4", f, p=p)
}
f.s4 <- Quad.S4 ()
plotf (f.s4)
f.s4@p
On Tue,
and if in
agreement, to suggest the best forum for such a discussion.
On Fri, Jan 29, 2021 at 4:42 AM Martin Maechler
wrote:
>
> >>>>> Abby Spurdle
> >>>>> on Thu, 28 Jan 2021 08:48:06 +1300 writes:
>
> > I note that there's a possibilit
cisci wrote:
>
> Wonderful!
> This is exactly what I need!
> Thank you very much!!
>
> Denis
>
>
>
> On Wed, Jan 27, 2021 at 10:58 AM Abby Spurdle
> wrote:
>>
>> u <- runif (410)
>> u <- (u - min (u) ) / diff (range (u) )
>
I got 16.60964.
Your curve is not linear up to the 39th point.
And as your points appear to be deterministic and nonlinear, splines
are likely to be easier to use.
Here's a base-only solution (if you don't like my kubik suggestion):
g <- splinefun (X, Y)
f <- function (x) g (x) - 6
uniroot (f, c
u <- runif (410)
u <- (u - min (u) ) / diff (range (u) )
constrained.sample <- function (rate)
{ plim <- pexp (c (9.6, 11.6), rate)
p <- plim [1] + diff (plim) * u
qexp (p, rate)
}
diff.sum <- function (rate)
sum (constrained.sample (rate) ) - 4200
rate <- uniroot (diff.sum, c (1,
You could use a spline to interpolate the points.
(And I'd consider increasing the number of points if possible, say to 200).
Then use a root finder, such as uniroot(), to solve for
f(i) - k
Where, k (a constant), would be 1e6, based on your example.
There are a number of variations on this approach.
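A minimal sketch of the basic version (the points and the constant
are made up; yours would come from your data):

x <- 1:100
y <- (200 * x)^2 #deterministic and nonlinear
g <- splinefun (x, y)
k <- 1e6
uniroot (function (v) g (v) - k, range (x) )$root #approximately 5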
Sorry, Bert.
The fitdistr function estimates parameters via maximum likelihood.
(i.e. The "lognormal" part of this is not a kernel).
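e.g.

library (MASS)
x <- rlnorm (200, 0, 0.5) #simulated data, for illustration
fitdistr (x, "lognormal") #maximum likelihood estimates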
On Fri, Jan 22, 2021 at 5:14 AM Bert Gunter wrote:
>
> In future, you should try to search before posting. I realize that getting
> good search terms can sometime
It's not clear from your post why you're trying to contact the maintainer.
But it gives the impression you're trying to contact the maintainer,
of an archived package, because you can't install the package.
It's not their responsibility to respond to these kinds of questions.
Also, I note the most
And it was supposed to say billions.
plt (main="Monthly NZ GDP (Billions)")
On Sat, Jan 2, 2021 at 2:32 PM Abby Spurdle wrote:
>
> I'm not enthusiastic about nonstandard evaluation and allowing
> functions to change state data.
> Currently, I use some of this
I'm not enthusiastic about nonstandard evaluation and allowing
functions to change state data.
Currently, I use some of this in my own packages, but I'm planning to
remove most of it.
But I did have some fun with your function.
--
plt <- memify (plot)
x <- 1:12
y1 <- seq (0, 18,, 12)
y2
package
> RQuantLib (written by the incredible Dirk Eddelbuettel.)
>
> Best,
> Eric
>
>
>
> On Thu, Dec 24, 2020 at 10:34 PM Abby Spurdle wrote:
>>
>> Dear All,
>>
>> One of the most significant contributors to open source finance is:
>>
>>
Dear All,
One of the most significant contributors to open source finance is:
Diethelm Würtz
https://comp.phys.ethz.ch/news-and-events/nc/2016/08/in-memoriam-diethelm-wuertz.html
And on that note, I'd like to wish Merry Christmas to a great
mathematician and programmer, and his family.
A quick
Hi Chao Liu,
I'm having difficulty following your question, and examples.
And also, I don't see the motivation for increasing, then decreasing
the sample sizes.
Intuitively, one would compute the correct sample sizes, first time round...
But I thought I'd add some comments, just in case they're u
Dear list,
I've been writing R-based software for image and spatial data. And
I've decided to support a dual implementation with both base graphics
and grid graphics, and the possibility of other graphics systems
later.
Also, I've decided to plot raster images with the vertical axis
flipped. Whic
Hi Eduard,
> Now I developed a service that executes Rscript (Using ProcessBuilder),
> sends text to stdin of the process and reads from stdout of the
> process.
This doesn't answer your question, but may be relevant.
I have a java-based application that works on a similar principle.
(The code is
If I copy a paste into Consulus (my Java/Swing based software), I get a square.
(Screenshot, attached).
But the interesting thing is that there's a different result when
running the code via the source function (with the defaults, anyway)
versus piping it in.
I have no idea what everyone's talking about.
What invisible character
The black triangle triangle with a question mark renders fine in (my) gmail.
And it's a unicode character used when there was a problem reading
(presumably text) data.
https://en.wikipedia.org/wiki/Specials_(Unicode_block)
> Surely these colors can be changed
> to something less offensive- my suggestion is "blush."
> How can I find out who to contact about making this happen?
Yes, they can.
blush <- "#CD5C5C"
mycols <- function () { #your code here...
I note that:
(1)
Changing existing code (esp in base p
I've come to the conclusion this whole thing was a waste of time.
This is after evaluating much of the relevant information.
The main problem is a large number of red herrings (some in the data,
some in the context), leading to pointless data analysis and pointless
data collection.
It's unlikely that
I've updated the dataset.
(Which now includes turnout and population estimates).
Also, I've found some anomalous features in the data.
(Namely, more "straight lines" than what I would intuitively expect).
The dataset/description are on my website.
(Links at bottom).
RESENT
INITIAL EMAIL, TOO BIG
ATTACHMENTS REPLACED WITH LINKS
I created a dataset, linked.
Had to manually copy and paste from the NY Times website.
> head (data, 3)
STATE EQCOUNTY RMARGIN_2016 RMARGIN_2020 NVOTERS_2020 SUB_STATEVAL_2016
1 Alabama Mobile 13.3 12
> such a repository already exists -- the NY Times, AP, CNN, etc. etc. already
> have interactive web pages that did this
I've been looking for presidential election results, by ***county***.
I've found historic results, including results for 2016.
However, I can't find such a dataset, for 2020.
> What can you tell me about plans to analyze data from this year's
> general election, especially to detect possible fraud?
I was wondering if there's any R packages with out-of-the-box
functions for this sort of thing.
Can you please let us know, if you find any.
> I might be able to help with
Hi Berina,
I'm not an expert on genetics.
I haven't looked at the package.
And I've only glanced at your question.
So, this is probably not the best response.
But as no one else has responded, here's some comments:
(1)
Have you checked if there's a function in the package to do what you want?
T
> It should be a 2D slice/plane embedded into a 3D space.
I was able to come up with the plot, attached.
My intention was to plot national boundaries on the surface of a sphere.
And put the slice inside.
However, I haven't (as yet) worked out how to get the coordinates for
the boundaries.
Let me
If you have "value" as a function of latitude and radius, isn't that a
2D (not 3D) scalar field?
Which can be plotted using a regular heatmap.
If you want a curved edge where depth=0 (radius=?), that's not too
difficult to achieve.
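A minimal sketch, with a made-up value function, over (latitude, radius):

lat <- seq (-90, 90, length.out=50)
radius <- seq (3000, 6371, length.out=50)
value <- outer (lat, radius, function (a, r) r * cospi (a / 180) ) #made up
image (lat, radius, value, xlab="latitude", ylab="radius")
contour (lat, radius, value, add=TRUE)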
Not quite sure what continent boundaries mean in this context, but
dmixgampar <- function (x, param1, param2, ...)
{
#compute density at x
}
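If a gamma/Pareto mixture is what's intended, the body might look
something like this (the mixing weight w, and the use of
actuar::dpareto, are my assumptions, not from the original post):

library (actuar) #for dpareto

dmixgampar <- function (x, w, shape, rate, pshape, pscale)
    w * dgamma (x, shape, rate) + (1 - w) * dpareto (x, pshape, pscale)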
On Wed, Oct 21, 2020 at 8:03 PM Charles Thuo wrote:
>
> Dear Sirs,
>
> The below listed code fits a gamma and a pareto distribution to a data set
> danishuni. However the distributions are not appropriate
> SNP$density <- get_density(SNP$mean, SNP$var)
> > summary(SNP$density)
>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
>       0     383     696     738    1170    1789
This doesn't look accurate.
The density values shouldn't all be integers.
And I wouldn't expect the smallest density to be zero.
s <- s + geom_density_2d() + geom_point() + my.theme + ggtitle("SNPs")
> >>
> >> versus what is in the data:
> >>
> >> > head(SNP)
> >>                mean      var     sd
> >> FQC.10090295 0.0327 0.002678 0.0517
> >> FQC.1011
> My understanding is that this represents bivariate normal
> approximation of the data which uses the kernel density function to
> test for inclusion within a level set. (please correct me)
You can fit a bivariate normal distribution by computing five parameters.
Two means, two standard deviations, and the correlation.
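In R, given data vectors x and y, that's simply:

x <- rnorm (100); y <- x + rnorm (100) #stand-in data
c (mean (x), mean (y), sd (x), sd (y), cor (x, y) ) #the five parameters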
Running R on Chromebook, is contrary to the way Chromebook is designed to work.
In theory, R can be run off a server.
Then people/students can access it not only from a Chromebook, but
from their phones.
Unfortunately, this is not a topic I'm familiar with, so can't offer specifics.
But, from an edu
rep(0,2*J-3)
>
> for(j in 2:nrow(Amat)){
> Amat[j-1,j-1] = -1
> Amat[j,j-1] = 1
> }
>
> for(j in 3:nrow(Amat)){
> Amat[j,J+j-3] = -1/(Q[j]-Q[j-1])
> Amat[j-1,J+j-3] = 1/(Q[j]-Q[j-1])
> Amat[j-2,J+j-3] = -1/(Q[j-1]-Q[j-2])
> }
>
> for(j in 2:nrow(
> I'm trying to replicate a C++ code with R.
Notes:
(1) I'd recommend you make the code more modular.
i.e. One function for initial data prep/modelling, one function for
setting up and solving the QP, etc.
This should be easier to debug.
(However, you would probably have to do it to the C++ code f
I was wondering if you're trying to fit a curve, subject to
monotonicity/convexity constraints...
If you are, this is a challenging topic, best of luck...
On Tue, Sep 22, 2020 at 8:12 AM Abby Spurdle wrote:
>
> Hi,
>
> Sorry, for my rushed responses, last night.
> (Shouldn
Hi,
Sorry, for my rushed responses, last night.
(Shouldn't post when I'm about to log out).
I haven't used the quadprog package for nearly a decade.
And I was hoping that an expert using optimization in finance or
economics would reply.
Some comments:
(1) I don't know why you think bvec should b
One more thing, is bvec supposed to be a matrix?
Note you may need to provide a reproducible example, for better help...
On Mon, Sep 21, 2020 at 10:09 PM Abby Spurdle wrote:
>
> Sorry, ignore the last part.
> What I should have said is that the inequality has the opposite sign.
>
Sorry, ignore the last part.
What I should have said is that the inequality has the opposite sign.
>= bvec (not <= bvec)
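For reference, solve.QP's constraints have the form
t (Amat) %*% b >= bvec, so a "less than" constraint needs both sides
negated. A toy example:

library (quadprog)

#minimize 0.5 * t (b) %*% D %*% b, subject to b >= 1
D <- diag (2)
d <- c (0, 0)
A <- diag (2) #t (A) %*% b >= bvec
bvec <- c (1, 1)
solve.QP (D, d, A, bvec)$solution #c (1, 1)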
On Mon, Sep 21, 2020 at 10:05 PM Abby Spurdle wrote:
>
> Are you using the quadprog package?
> If I can take a random shot in the dark, should bvec be -bvec?
>
Are you using the quadprog package?
If I can take a random shot in the dark, should bvec be -bvec?
On Mon, Sep 21, 2020 at 9:28 PM Maija Sirkjärvi
wrote:
>
> Hi!
>
> I was wondering if someone could help me out. I'm minimizing a following
> function:
>
> \begin{equation}
> $$\sum_{j=1}^{J}(m_{j}
Hi H,
I probably owe you an apology.
I was just reading the geom_contour documentation.
It's difficult to follow.
Base R functions, my functions, and pretty much everyone's functions,
take a matrix as input.
But as far as I can tell, geom_contour wants a data.frame with three
{x, y and z} coordinates.
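A sketch of the conversion, from a matrix to the long-format data
frame (using the volcano dataset, as a stand-in):

m <- volcano
df <- data.frame (
    x = rep (seq_len (nrow (m) ), times = ncol (m) ),
    y = rep (seq_len (ncol (m) ), each = nrow (m) ),
    z = as.vector (m) )
head (df)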
> I was looking at this example which uses geom_contour():
>
> ggvolcano = volcano %>%
> reshape2::melt() %>%
> ggplot() +
> geom_tile(aes(x=Var1,y=Var2,fill=value)) +
> geom_contour(aes(x=Var1,y=Var2,z=value),color="black") +
> scale_x_continuous("X",expand = c(0,0)) +
> scale_y_continuous("
> Understood
I'd recommend you try to be more precise.
> I just began looking at the volcano dataset which uses geom_contour.
The volcano dataset does *not* use geom_contour.
However, the help file for the volcano dataset, does use the
filled.contour function, in its example.
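i.e. Something along these lines (my paraphrase, not a verbatim copy
of the help file's example):

filled.contour (volcano, color.palette=terrain.colors, asp=1)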
> I now realize th
> But there's no reason for the user to do that when using the plotting
> function.
I should amend the above.
There's no reason for the user to do that (compute a third "variable"
representing density), if using a high level plotting function, that's
designed to compute the density for you.
It i
I'm not familiar with the gg graphics system.
However, I am familiar with density estimation, and density visualization.
There is *no* third variable, as such.
But rather, density estimates, which in this context, would usually be a matrix.
(And are computed inside the plotting or density estimation functions).
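e.g. Using KernSmooth (the bandwidths here are arbitrary):

library (KernSmooth)
x <- cbind (rnorm (200), rnorm (200) ) #stand-in data
est <- bkde2D (x, bandwidth = c (0.4, 0.4) )
str (est) #x1, x2 and a matrix (fhat) of density estimates
contour (est$x1, est$x2, est$fhat)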
I'm not familiar with these subjects.
And hopefully, someone who is, will offer some better suggestions.
But to get things started, maybe...
(1) What packages are you using (re: tdm)?
(2) Where does the problem happen, in dist, hclust, the plot method
for hclust, or in the package(s) you are using?
> My question is how do I present/plot the effect of covariate "TD" in
> the example it has "P" equal to 3.32228e-12 for all IDs in the
> resulting file so that I show how much effect covariate "TD" has on
> the analysis. Should I run another regression without covariate "TD"
I'll take a second sh
I'm wondering if you want one of these:
(1) Plots of "Main Effects".
(2) "Partial Residual Plots".
Search for them, and you should be able to tell if they're what you want.
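For (2), base R has a convenient starting point (the model here is a
stand-in, just to show the call):

fit <- lm (mpg ~ wt + hp, data=mtcars)
termplot (fit, partial.resid=TRUE) #partial residual plots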
But a word of warning:
Many people (including many senior statisticians) misinterpret this
kind of information.
Because, it
> The absolute
> value of e grows as L grows, but by how much? It seems statistical
> theory claims it grow by an order of the square root of L.
Assuming you want the standard deviation for the number of successes,
given p=0.5:
#exact
0.5 * sqrt (n)
#numerical approximation
sd (rbinom (1e6, n, 0.5) )
Just re-read your question and realized I misread the error message.
The argument is of zero length.
But the conclusion is the same, either a bug in the package, or a
problem with your input.
On Fri, Aug 21, 2020 at 4:16 PM Abby Spurdle wrote:
>
> Note that I'm not familiar with t
Note that I'm not familiar with this package or the method.
Also note that you haven't told anyone what function you're using, or
what your call was.
I'm assuming that you're using the rotationForest() function.
According to its help page, the default is:
K = round(ncol(x)/3, 0)
There's no r
a big whopping U-turn.
Abby Spurdle wrote:
> There's a work around.
> You can redefine the print function, using something like:
> print = function (...) base::print (...)
Duncan Murdoch replied:
> That's a really, really bad idea. If there are two generics named the
> same,
> a) Read about it yourself. It is a legal definition.
Not quite.
Your statement implies some sort of universalism, which is unrealistic.
Legal definitions vary from one legal system to the next.
I'm not an expert in US company/corporate law.
But as I understand it, the applicable laws vary from
On Fri, Aug 14, 2020 at 12:11 PM Jeff Newmiller
wrote:
> It is a public benefit corporation
Seriously?
On Fri, Aug 14, 2020 at 12:11 PM Jeff Newmiller
wrote:
> used to introduce people to R
Correction, it introduces people to a modified version of R.
Hi Kevin,
Intuitively, the first step would be to ensure that all versions of R,
and all the R packages, are the same.
However, you mention HPC.
And the glmnet package imports the foreach package, which appears
(after a quick glance) to support multi-core and parallel computing.
If your code use
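A first check, on each machine (these only report, they don't fix
anything):

R.version.string
packageVersion ("glmnet")
packageVersion ("foreach")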
Hi,
Your example is not reproducible.
However, I suspect that the following is the problem:
c("red","green","blue","aquamarine","magenta")[MI_fish_all.mrt$where]
Here's my version:
where = c (3, 3, 8, 6, 6, 9, 5, 5, 9, 3, 8, 6, 9, 6, 5, 9, 5, 3, 8, 6,
9, 6, 5, 9, 5, 3, 3, 8, 6, 6, 9, 5, 5, 9, 6
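The point being: where contains values up to 9, so indexing a
length-five color vector gives NAs. e.g.

cols <- c ("red", "green", "blue", "aquamarine", "magenta")
cols [c (3, 8, 9)] #"blue" NA NA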
> Sorry, Abby, I do disagree here ((strongly enough as to warrant
> this reply) :
Which part are you disagreeing with?
That unambiguous names/references should be used, or that there are
many R functions for GLMs.
The wording of your post suggests (kind of) that there is only one R
function for
That's a bit harsh.
Isn't the best advice here to post a reproducible example...
Which I believe has been mentioned.
Also, I'd strongly encourage people to use package+function name, for
this sort of thing.
stats::glm
As there are many R functions for GLMs...
On Sun, Aug 2, 2020 at 12:47
On Sat, Jul 25, 2020 at 12:40 AM Martin Maechler
wrote:
> Good answers to this question will depend very much on how many
> 'Machine' and 'Region' levels there are.
I second that.
And unless I missed something, the OP hasn't answered this question, as such.
But "10k+" combinations, does imply aro
On Sat, Jul 11, 2020 at 8:04 AM Fox, John wrote:
> We've had several solutions, and I was curious about their relative
> efficiency. Here's a test
Am I the only person on this mailing list who learnt to program with ASCII...?
In theory, the most ***efficient*** solution, is to get the
ASCII/UTF
Last line should use outside = c (0, 1).
But not that important.
On Sat, Jul 11, 2020 at 1:31 PM Abby Spurdle wrote:
>
> NOTE: LIMITED TESTING
> (You may want to check this carefully, if you're interested in using it).
>
> library (kubik)
> library (mvtnorm)
>
> sim
NOTE: LIMITED TESTING
(You may want to check this carefully, if you're interested in using it).
library (kubik)
library (mvtnorm)
sim.cdf <- function (mx, my, sdx, sdy, cor, ..., n=2e5)
sim.cdf.2 (mx, my, sdx^2, sdy^2, sdx * sdy * cor, n=n)
sim.cdf.2 <- function (mx, my, vx, vy, cov, ..., n=
shell ("Notepad", wait=FALSE)
On Mon, Jul 6, 2020 at 10:07 AM Sparks, John wrote:
>
> Hi R Helpers,
>
> I am trying to open another application from within R and then work with it.
>
> I can get the application to open, but R then hangs at that point (spinning
> blue circle in the middle of the
corr_weight
> 4.3 2.3 5800900.000
> 5.7 6.1 250 11.000.600
> .. .. .. ..
>
> > On Jun 22, 2020, at 02:02
, then the expressions above can be replaced
with the union of multiple (sub)samples.
Then an estimate/inference (say correlation) can be computed from one
or more combined samples.
Sorry for triple posting.
On Mon, Jun 22, 2020 at 10:00 AM Abby Spurdle wrote:
>
> Hi Frederick,
>
> I gl
Just realised the above notation may be a bit misleading.
Because I was thinking in terms of simulated data.
On Mon, Jun 22, 2020 at 10:00 AM Abby Spurdle wrote:
>
> Hi Frederick,
>
> I glanced at the webpage you've linked.
> (But only the top three snippets).
>
> Thi
Hi Frederick,
I glanced at the webpage you've linked.
(But only the top three snippets).
This is what I would call the sum of random variables.
(X, Y) = (X1, Y1) + (X2, Y2) + ... + (Xn, Yn)
The example makes the mistake of assuming that the Xs are normally
distributed, and each of the Ys are fro
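A small simulation of this setup (the distributions are made up, just
to show dependence between the components):

n <- 5; nsim <- 1e4
xs <- matrix (rexp (nsim * n), nsim, n) #Xi, not normal
ys <- xs + matrix (rnorm (nsim * n), nsim, n) #Yi depends on Xi
X <- rowSums (xs)
Y <- rowSums (ys)
cor (X, Y) #the dependence survives the summation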
If I understand your question correctly, you're already able to read
an EPS file.
So, essentially, you have an answer to your question.
Paul Murrell published an article on using raster graphics, in 2011.
https://journal.r-project.org/archive/2011-1/RJournal_2011-1_Murrell.pdf
I would assume ther
I'm not familiar with the mnormt package.
I'm guessing its documentation may answer some (if not all) of your questions.
Note that my package, bivariate, wraps the dmvnorm function, from the
mvtnorm package.
library (bivariate)
f <- nbvpdf (
0, 0, #means X, Y
1, 1, #sds X, Y
0.5) #correlation
> solving Linear Programming Problems with O(L^1.5)
> computational complexity
I'm not an expert on this topic.
However, a quick glance at the topic suggests that these sorts of
algorithms are usually exponential in "n", here the number of
variables/dimensions.
Apparently, "L" is the number of inp
(excerpts only)
> Tried this new version but did not execute...
> Error in plot_ds(bat_call, "plot 2", c(25, 28), c(-15, 10), k1 = 1.25, :
> object 'bat_call' not found
I've used the bat_call object, from Jim's earlier post.
> The contour lines are actually useful to see groupings.
> However w/o a legend for density it is not possible to see what is
> presented.
I need to reiterate that the diagonal lines may be important.
Also, I'm not sure I see the point in adding density values.
Unless people have a good knowl
> that extraneous white lines in PDFs are the fault of the PDF
> viewing program rather than of R.
Except it's a PNG file.
I've tried to minimize artifacts viewing PDF files.
But assumed (falsely?) that PNGs and other raster formats, would be fine.
> Very nice
Jim, thank you.
However, the (deterministic, or near-deterministic) diagonal lines in
the plot, make me question the suitability of this approach.
In my plot, the contour lines could be removed, and brighter colors
could be used.
But perhaps, a better approach would be to model those
I'm putting this back on the list.
> So how would I set up the code to do this with the data type I have?
> I will need to replicate the same task > 200 times with other data sets.
> What I need to do is plot *Fc *against *Sc* with the third dimension being
> the *density* of the data points.
U
Hi,
I'm probably biased.
But my package, bivariate, contains a wrapper for KernSmooth::bkde2D,
which can produce both 3D surface plots and (pretty) contour plots of
bivariate kernel density estimates, conveniently.
https://cran.r-project.org/web/packages/bivariate/vignettes/bivariate.pdf
(pages
This sounds like a homework question...
But... numerical linear algebra rocks...
cbind (diag (1:3), 4:6)
On Sat, May 23, 2020 at 9:46 PM Vahid Borji wrote:
>
> Hi my friends,
>
> I want to make the below matrix in r:
>
> 1 0 0 4
>
> 0 2 0 5
>
> 0 0 3 6
>
> I used the below code:
>
> matrix(
> My book is
> Statistical Analysis and Data Display, Richard M. Heiberger, Burt
> Holland, 2nd ed. 2015
In all fairness, I thought I should look at your book.
I was quite impressed by the chapter on multiple comparisons.
And may look again, later.
In my personal opinion (diverging slightly), with
> The Excel file is what you need.
Well, now I'm in a bad mood.
I went to all the trouble of opening the thing...
And the first two Springer-published books I look for, aren't there.
(1) Programming with Data, John Chambers
(2) Applied Econometrics with R, Z and co.
Next time someone tells me t