Dear All
I am kind of stuck with a piece of code, part of which seems to be causing a
problem, or at least I think so. Maybe the community can help me. It's
simple, but I suppose I am missing something.
I generate a data matrix X, say of order n*p, where n represents
independent row-vectors and p corr
fun1 <- function(maxN, p1L, p1H, p0L, p0H, c11, c12, c1, c2, beta, alpha) {
  d <- do.call(rbind, lapply(2:(maxN - 6), function(m1)
    do.call(rbind, lapply(2:(maxN - m1 - 4), function(n1)
      do.call(rbind, lapply(0:m1, function(x1)
        do.call(rbind, lapply(0:n1, function(y1)
          data.frame(m1, n1, x1, y1)))))))))
  colnames(d) <- c("m1", "n1", "x1", "y1")
Sara,
On 11 March 2013 18:26, cyl123 <505186...@qq.com> wrote:
> I have some questions about ARIMA and FARIMA:
Looks like they're all answered in the PDF for fArma[1].
--
Sent from my mobile device
1. http://cran.r-project.org/web/packages/fArma/fArma.pdf
Dear Mr Hasselman,
for a better understanding I have attached an example solved in Excel
using the Solver tool.
I want to assign each municipality to one of the centres and compute
the minimum total cost, as you can see in the example.
I used the package lpSolve, but it does not work. I a
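For what it's worth, a minimal sketch (with a made-up 5 x 2 cost matrix rather than the attached data) of setting such an assignment up as a binary programme with lpSolve, forcing each municipality (row) to pick exactly one centre (column):

library(lpSolve)

cost <- matrix(c(4, 7,    # made-up costs: 5 municipalities x 2 centres
                 2, 9,
                 6, 3,
                 5, 5,
                 8, 1), ncol = 2, byrow = TRUE)
n_mun <- nrow(cost)
n_cen <- ncol(cost)

# decision variables x[i, j] = 1 if municipality i is served by centre j,
# stored in row-major order; one constraint per municipality: sum_j x[i, j] = 1
obj <- as.vector(t(cost))
con <- t(sapply(seq_len(n_mun), function(i) {
  row <- numeric(n_mun * n_cen)
  row[((i - 1) * n_cen + 1):(i * n_cen)] <- 1
  row
}))
sol <- lp("min", obj, con, rep("=", n_mun), rep(1, n_mun), all.bin = TRUE)
matrix(sol$solution, ncol = n_cen, byrow = TRUE)  # 0/1 assignment matrix
sol$objval                                        # minimum total cost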
Dear r-users,
Originally my data is in notepad. I use data.frame(...) to convert the data
into columns but it has no header. The data below is what I got in R. I would
like to extract these data:
19710629 39.3 19701126
19720629 33.8 19720517
...
when I use data2_30min[1,
Pavel_K vsb.cz> writes:
>
> Dear all,
> I am trying to find the solution for the optimization problem focused on
> the finding minimum cost.
> I used the solution proposed by excel solver, but there is a restriction
> in the number of variables.
>
> My data consists of 300 rows representing cities
What about read.table()?
--
Ivan CALANDRA
Université de Bourgogne
UMR CNRS/uB 6282 Biogéosciences
6 Boulevard Gabriel
21000 Dijon, FRANCE
+33(0)3.80.39.63.06
ivan.calan...@u-bourgogne.fr
http://biogeosciences.u-bourgogne.fr/calandra
On 12/03/13 10:03, Roslina Zakaria wrote:
Dear r-users,
On 12-03-2013, at 08:45, Pavel_K wrote:
> Dear Mr Hasselman,
> for a better understanding I have attached an example solved in excel by
> using the tool Solver.
>
> I want to assign for each municipality one of the centres and apply it for
> calculating the minimum cost as you can see in an exa
Hello,
Like Ivan said, use read.table to read in the data; then, to select
only some of the columns, you can use several ways.
data_30min <- read.table(text = "
1 19710629 08(PARTIAL) 39.3 at interval beginning 19701126 010326
2 19720629 08(PARTIAL) 33.8 at interval be
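(The read.table call above was truncated in the digest; a self-contained sketch of the same idea, using only the first complete data line shown earlier -- the column positions are inferred from that line and may need adjusting:)

data_30min <- read.table(text = "
1 19710629 08(PARTIAL) 39.3 at interval beginning 19701126 010326
", header = FALSE, stringsAsFactors = FALSE)
# keep only the first date, the value, and the second date (columns 2, 4 and 8)
wanted <- data_30min[, c(2, 4, 8)]
names(wanted) <- c("date1", "value", "date2")
wanted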
Hi everybody,
I want to test differences in means among independent groups (more than
two). The assumptions of normality and homoscedasticity are violated, but I
don't want to use rank tests like Kruskal-Wallis, because I lose too much
information under the rank transformation. Can I use
oneway_te
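A minimal sketch (with made-up data) of one option that stays on the original scale: Welch's heteroscedasticity-robust one-way test in base R. A permutation alternative with the same formula interface is coin::oneway_test.

set.seed(1)
dat <- data.frame(
  value = c(rnorm(20, 5, 1), rnorm(20, 6, 3), rnorm(20, 7, 2)),
  group = factor(rep(c("A", "B", "C"), each = 20))
)
# Welch's one-way test: no equal-variance assumption, no rank transformation
oneway.test(value ~ group, data = dat, var.equal = FALSE)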
Hi,
My apologies for the naive question!
I have three overlapping sets and I want to find the probability of finding
a larger/greater intersection for 'A intersect B intersect C' (in the
example below, I want to find the probability of finding more than 135
elements that are common in sets A, B &
>
> I'm no expeRt, but suppose that we change the setup slightly:
>
> xx <- x[sample(nrow(x)), ]
>
> Now what would you like
>
> aggregate(value ~ group + year, data=xx, FUN=function(z) z[1])
>
> to return?
>
> Personally, I prefer to have R return the same thing regardless
> of how the input da
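To make the point concrete, a tiny made-up example where the z[1] summary can change with the row order of the input:

x <- data.frame(group = rep(c("a", "b"), each = 2),
                year  = 2001,
                value = 1:4)
aggregate(value ~ group + year, data = x, FUN = function(z) z[1])
set.seed(42)
xx <- x[sample(nrow(x)), ]
aggregate(value ~ group + year, data = xx, FUN = function(z) z[1])  # may differ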
As this seems to be a statistics, not an R, question, it is off topic
here. Post on a statistics list like stats.stackexchange.com instead.
-- Bert
On Tue, Mar 12, 2013 at 6:22 AM, Brian Smith wrote:
> Hi,
>
> My apologies for the naive question!
>
> I have three overlapping sets and I want to f
Dear r-help list members,
I thought that many of you might be interested in the following announcement
from the data archive of the Inter-University Consortium for Political and
Social Research (a world-wide organization of nearly 800 universities and
other institutions headquartered at the Univer
Hello everybody
I have the following problem. I have to load a number of xls files from
different folders (each xls file has the same number of columns, and
different numbers of rows). Each xls file is named with a number, i.e.
12345.xls, and is contained in a folder with the same name, say 12345.
Onc
You can use the lapply or rapply functions on the resulting list to break
each piece into a list itself, then apply the lapply or rapply function to
those resulting lists, ...
On Mon, Mar 11, 2013 at 3:41 PM, Not To Miss wrote:
> Thanks. That's just a simple example - what if there are more co
hello all,
I'm overlaying numerous scatter plots onto a map (done in PBSmodelling). In
this case I'm placing each plot by setting par(fig) to the centroid of map
polygons. The location/mapping part is not so important. There are cases of
small overlaps for some plots (ie figures) so I'm keen to
Hi Mario!
I'm not really familiar with this kind of manipulation, but I think you
can do it more or less like this (some people on this list might give a
more detailed answer):
#Create an empty named list
list_df <- vector(mode = "list", length = length(lista_rea_c))
names(list_df) <- lista_rea_c
I like to bootstrap regression models, saving the entire set of bootstrapped
regression coefficients for later use so that I can get confidence limits
for a whole set of contrasts derived from the coefficients. I'm finding
that ordinary bootstrap percentile confidence limits can provide poor
coverage
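Not an answer to the reuse-your-own-resamples question itself, but a minimal sketch (made-up data) of the standard boot/boot.ci route, where the statistic returns the whole coefficient vector plus a derived contrast so that BCa limits can be requested per component:

library(boot)
set.seed(3)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 2 * d$x1 - d$x2 + rnorm(100)
bfun <- function(data, i) {
  cf <- coef(lm(y ~ x1 + x2, data = data[i, ]))
  c(cf, contrast = unname(cf["x1"] - cf["x2"]))
}
b <- boot(d, bfun, R = 1999)
boot.ci(b, type = "bca", index = 4)  # BCa limits for the x1 - x2 contrast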
Dear Frank,
I'm not sure that it will help, but you might look at the bootSem() function
in the sem package, which creates objects that inherit from "boot". Here's
an artificial example:
-- snip --
library(sem)
for (x in names(CNES)) CNES[[x]] <- as.numeric(CNES[[x]])
model.cnes
Hello,
I'm currently using the segmented package of M.R. Muggeo to fit a
two-slope segmented regression. I would like to constrain the left
slope to be zero, but I cannot manage to do it. I followed the explanations in
the package documentation (http://dssm.unipa.it/vmuggeo/segmentedRnews.pdf) to write
the following code:
Dear all,
I'm trying to fit a soap film smoother by mgcv, accounting for a polygonal
boundary. Everything works well until I plot the results, when I get the
right shape, but plot coordinates are exchanged. Function vis.gam() doesn't
run into the same problem. Here I enclose the script and attach
Hey,
I'm trying to implement a GARCH model with Johnson-SU innovations in order to
simulate returns of a financial asset. The model should look like this:
r_t = alpha + lambda*sqrt(h_t) + sqrt(h_t)*epsilon_t
h_t = alpha0 + alpha1*epsilon_(t-1)^2 + beta1 * h_(t-1).
Alpha refers to a risk-free retu
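A minimal sketch of simulating that recursion by hand (all parameter values are made up). The Johnson-SU generator is left as a placeholder argument, rjsu_std, which must return zero-mean, unit-variance draws -- it could wrap, e.g., SuppDists::rJohnson with standardised parameters; a normal stand-in is used here so the sketch runs as is:

simulate_garch_jsu <- function(n, alpha = 2e-4, lambda = 0.05,
                               alpha0 = 1e-6, alpha1 = 0.08, beta1 = 0.90,
                               rjsu_std = rnorm) {
  z <- rjsu_std(n)                       # standardised innovations epsilon_t
  h <- numeric(n); r <- numeric(n)
  h[1] <- alpha0 / (1 - alpha1 - beta1)  # start at the unconditional variance
  r[1] <- alpha + lambda * sqrt(h[1]) + sqrt(h[1]) * z[1]
  for (t in 2:n) {
    # the shock entering the variance equation is the scaled residual sqrt(h)*epsilon
    h[t] <- alpha0 + alpha1 * (sqrt(h[t - 1]) * z[t - 1])^2 + beta1 * h[t - 1]
    r[t] <- alpha + lambda * sqrt(h[t]) + sqrt(h[t]) * z[t]
  }
  data.frame(r = r, h = h)
}
sims <- simulate_garch_jsu(1000)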
This example dataset breaks kmeans in R 2.15.2, installed from
the Belgian CRAN mirror, on Ubuntu 12.04 LTS 64-bit
> my.sample.2
     Day1 Day2 Day3 Day4 Day5 Day6
[1,]    4    5    5    3    5    5
[2,]    7    7    6    5    6    6
[3,]    6    6    5    5    5    5
[4,]    5    3    4
Thank you. This does what I want (apparently by coercing data.frame to list
with no other structure).
take me off the list please
--
Pat Jackson
Graduate Student
Utah State University
Wildland Resources Department
5230 Old Main Hill
Logan, UT 84322-5230
pat.jack...@aggiemail.usu.edu
816-716-6924
Thanks. Is there any more elegant solution? What if I don't know how many
levels of nesting ahead of time?
On Tue, Mar 12, 2013 at 8:51 AM, Greg Snow <538...@gmail.com> wrote:
> You can use the lapply or rapply functions on the resulting list to break
> each piece into a list itself, then apply
Please take yourself off the list at the URL given in the footer:
https://stat.ethz.ch/mailman/listinfo/r-help
On Tue, Mar 12, 2013 at 12:17 PM, Patrick Jackson
wrote:
> take me off the list please
>
> --
>
--
Sarah Goslee
http://www.functionaldiversity.org
Dear useRs,
Some time ago I queried the list as to an efficient way of building a function
which acts as ls() but with a different default for all.names:
http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html
I have struck upon a solution which so far has performed admirably. In
particular
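For reference, one minimal way to build such a function (not necessarily the solution alluded to above) is a wrapper that evaluates ls() in the caller's frame with a different default:

Ls <- function(..., all.names = TRUE) {
  # list objects in the caller's environment, including dot-names by default
  ls(envir = parent.frame(), all.names = all.names, ...)
}
.hidden <- 1
Ls()   # includes ".hidden"; plain ls() would not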
Dear useRs,
I have a matrix which contains population data at 12 points for every station.
In total there are 179 stations, so it is a matrix of 179 columns and 12 rows.
Using a distance calculation method of my own, a distance matrix was
prepared for these data. Now I want to do k-nearest
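A minimal sketch (with a random stand-in in place of the real distance matrix) of picking the k nearest stations for every station directly from a symmetric distance matrix:

set.seed(4)
n_stations <- 179
D <- as.matrix(dist(matrix(rnorm(n_stations * 2), ncol = 2)))  # stand-in distances
k <- 5
# for each row, order the distances and drop the first index (the station itself)
nearest <- t(apply(D, 1, function(d) order(d)[2:(k + 1)]))
dim(nearest)   # 179 x 5: indices of the 5 nearest stations
nearest[1, ]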
On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John
wrote:
> Dear useRs,
>
> Some time ago I queried the list as to an efficient way of building a
> function which acts as ls() but with a different default for all.names:
>
> http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html
>
> I have stru
-Original Message-
From: Hadley Wickham [mailto:h.wick...@gmail.com]
Sent: Tuesday, 12 March, 2013 1:34 PM
To: Szumiloski, John
Cc: r-help@r-project.org
Subject: Re: [R] ls() with different defaults: Solution;
On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John
wrote:
> Dear useRs,
>
>
On 2013-03-12 05:29, Sahana Srinivasan wrote:
Hi everyone, I am having trouble understanding where I went wrong with my
code. It seems to logically be "all there" but the output says otherwise.
I know this is a bit long but I can't seem to find the errors so I would
appreciate your help :)
This
Hello all!
I have a problem extracting values greater than, for example, 1820.
I tried this code: x[x[,1]>1820,]->x1
Please help me!
Thank you!
The data structure is:
structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425,
1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074,
1.618, 1
I guess you are referring to names()
vec1 # dataset
vec1[names(vec1)>1820]
head(vec1[names(vec1)>1820])
# 1821 1822 1823 1824 1825 1826
#1.4250 1.0990 1.0070 1.1795 1.3855 1.4065
A.K.
- Original Message -
From: catalin roibu
To: r-help@r-project.org
Cc:
Sent: Tuesday, M
First take a look at your data and understand its structure. What you have
is a named vector and not a matrix:
> str(x)
Named num [1:213] 2.58 1.73 3.43 2.19 1.93 ...
- attr(*, "names")= chr [1:213] "1799" "1800" "1801" "1802" ...
> x1 <- x[names(x) > '1820']
> x1
1821 1822 1823 1824
Look at your data. You do not want values greater than 1820. There are none.
You want values with NAMES greater than 1820.
> x1 <- x[as.numeric(names(x)) >1820]
> x1
--
David L Carlson
Associate Professor of Anthropology
Texas A&M University
College Sta
Data in x is never > 1820:
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.8465  1.2890  1.5660  1.5710  1.8050  3.4340
And your object is a vector: trying to extract the first column with
x[,1] is meaningless, because x has no dimensions.
> dim(x)
NULL
It looks to me as if you wa
On 2013-03-12 03:55, Pierre Hainaut wrote:
Hello,
I'm currently using the segmented package of M.R. Muggeo to fit a
two-slope segmented regression. I would like to constrain a
null-left-slope, but I cannot make it. I followed the explanations of
the package (http://dssm.unipa.it/vmuggeo/segmente
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Szumiloski, John
> Sent: Tuesday, March 12, 2013 10:39 AM
> To: Hadley Wickham
> Cc: r-help@r-project.org
> Subject: Re: [R] ls() with different defaults: Solution;
>
>
>
> ---
Thank you very much for your help!
On 12 March 2013 20:04, Sarah Goslee wrote:
> Data in x is never > 1820:
> > summary(x)
>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
> 0.8465  1.2890  1.5660  1.5710  1.8050  3.4340
>
> And your object is a vector: trying to extract the first column with
> x
That's very helpful John. Did you happen to run a test to make sure that
boot.ci(..., type='bca') in fact gives the BCa intervals or that they at
least disagree with percentile intervals?
Frank
John Fox wrote
> Dear Frank,
>
> I'm not sure that it will help, but you might look at the bootSem()
Dear Frank,
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Frank Harrell
> Sent: Tuesday, March 12, 2013 2:24 PM
> To: r-help@r-project.org
> Subject: Re: [R] Bootstrap BCa confidence limits with your own
> resamples
>
> Tha
Hello,
I don't understand. Your object is a vector, without a dim attribute, so
x[, 1] is meaningless. And all the values are less than 1820:
any(x > 1820) # FALSE
Given the names attribute of your data, is it the values of x
corresponding to names greater than 1820? If so, try
y <- as.numeric(names(x))
Hi everyone.
I'm trying to create a graph where I could plot some lines on the right side.
Here an example:
layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1))
x = 1:100y = rnorm(x)+xplot(x,y)
reg = lm(y~x)abline(reg, col = "red")
plot(1, type="n", axes=F, xlab="", ylab="", x
On 2013-03-12 08:04, Folkes, Michael wrote:
hello all,
I'm overlaying numerous scatter plots onto a map (done in PBSmodelling). In
this case I'm placing each plot by setting par(fig) to the centroid of map
polygons. The location/mapping part is not so important. There are cases of
small overla
Your code appears to be a load of dingos' kidneys, but in general when
troubleshooting one can do worse than be guided by
fortune("magnitude and direction").
cheers,
Rolf Turner
On 03/13/2013 01:29 AM, Sahana Srinivasan wrote:
Hi everyone, I am having trouble understanding where I
Thanks very much Peter,
That may be a good start. Though I am aiming for automation and would
prefer to avoid the locator function.
I'll take a look...
Michael
-Original Message-
From: Peter Ehlers [mailto:ehl...@ucalgary.ca]
Sent: March 12, 2013 12:27 PM
To: Folkes, Michael
Cc: r-help@r
Hi,
You posted in HTML by mistake, so your code was mangled:
> I'm trying to create a graph where I could plot some lines on the right side.
> Here an example:
> layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1))
> x = 1:100y = rnorm(x)+xplot(x,y)
> reg = lm(y~x)abline(reg
Hi Michael:
I am not totally certain what you are trying to do, but Hadley Wickham and a
student have been working on a package that allows a variety of plots to be
added to maps. Examples can be found in this paper in Environmetrics :
> Glyph-maps for visually exploring temporal patterns in
Hi and thank you for your answer.
Sorry for the HTML post, here's the code (a line break was lost between +x
and plot(...)):
layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1))
x = 1:100
y = rnorm(x)+x
plot(x,y)
reg = lm(y~x)
abline(reg, col = "red")
plot(1, typ
I'm very grateful!
Thanks Roy.
-Original Message-
From: Roy Mendelssohn - NOAA Federal [mailto:roy.mendelss...@noaa.gov]
Sent: March 12, 2013 12:49 PM
To: Folkes, Michael
Cc: r-help@r-project.org
Subject: Re: [R] funtion equivalent of jitter to move figures on device
Hi Michael:
I am
It's much easier to use xts' time-of-day subsetting:
library(xts)
dat1 <- as.xts(read.zoo(text="TIME, Value1, Value2
01.08.2011 02:30:00, 4.4, 4.7
01.09.2011 03:00:00, 4.2, 4.3
01.11.2011 01:00:00, 3.5, 4.3
01.12.2011 01:40:00, 3.4, 4.5
01.01.2012 02:00:00, 4.8, 5.3
01.02.2012 02:30:00, 4.9, 5.2
0
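The call above was truncated; a self-contained sketch of the same idea (the read.zoo arguments below are my guesses, not necessarily those of the original post):

library(xts)
dat1 <- as.xts(read.zoo(text = "TIME, Value1, Value2
01.08.2011 02:30:00, 4.4, 4.7
01.09.2011 03:00:00, 4.2, 4.3
01.11.2011 01:00:00, 3.5, 4.3
01.12.2011 01:40:00, 3.4, 4.5
01.01.2012 02:00:00, 4.8, 5.3
01.02.2012 02:30:00, 4.9, 5.2
", header = TRUE, sep = ",", format = "%m.%d.%Y %H:%M:%S", tz = ""))
# time-of-day subsetting: rows whose clock time falls between 01:30 and 02:30
dat1["T01:30/T02:30"]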
Dear useRs,
I have some trouble with the calculation of Cook's distance in R.
The formula for Cook's distance can be found for example here:
http://en.wikipedia.org/wiki/Cook%27s_distance
I tried to apply it in R:
> y <- (1:400)^2
> x <- 1:100
> lm(y~x) -> linmod # just for the sake of a simple
Dear list members,
I am trying to fit a natural cubic spline to my dataset using the ns
function in the splines package.
Specifically, I do:
library(splines)
lm(y ~ ns(x, df=3), data =data)
How do I extract the values of the interior knots of the fitted spline ?
Thanks,
Rajat
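A minimal sketch (made-up data): the interior knots live in the "knots" attribute of the ns() basis (boundary knots in "Boundary.knots"), and the knots actually used by a fitted model can also be recovered from the "predvars" attribute of its terms:

library(splines)
set.seed(5)
dat <- data.frame(x = runif(100))
dat$y <- sin(2 * pi * dat$x) + rnorm(100, sd = 0.2)
fit <- lm(y ~ ns(x, df = 3), data = dat)
basis <- ns(dat$x, df = 3)
attr(basis, "knots")            # interior knots
attr(basis, "Boundary.knots")   # boundary knots
attr(terms(fit), "predvars")    # the ns() call, with knots, as stored in the fit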
Okay, so what you really want to do is be able to set a wide right
margin and draw some segments there? Using layout() is not the best
way to go about this: as you've discovered, you can't control the area
assigned.
You can "cheat" with layout(), as in:
layout(matrix(c(1,1,1,2), nrow=1))
but the
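As a concrete (made-up) illustration of the wide-right-margin route: keep a single plot, reserve the margin with par(mar), and draw segments there with clipping switched off via xpd:

old <- par(mar = c(5, 4, 4, 10) + 0.1)   # wide right margin
x <- 1:100
y <- rnorm(x) + x
plot(x, y)
abline(lm(y ~ x), col = "red")
usr <- par("usr")                        # user coordinates of the plot region
# xpd = NA disables clipping, so these segments land in the right margin
segments(x0 = usr[2] + 2,  y0 = c(20, 50, 80),
         x1 = usr[2] + 20, y1 = c(20, 50, 80), col = "blue", xpd = NA)
par(old)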
On Mar 12, 2013, at 9:37 AM, Not To Miss wrote:
> Thanks. Is there any more elegant solution? What if I don't know how many
> levels of nesting ahead of time?
It's even worse than what you now offer as a potential complication. You did
not provide an example of a data object that would illustra
xpd=TRUE might work well.
I'll give it a try.
Thank you for your assistance,
Phil
> Date: Tue, 12 Mar 2013 16:07:05 -0400
> Subject: Re: [R] Fine control of plot
> From: sarah.gos...@gmail.com
> To: pmassico...@hotmail.com
> CC: r-help@r-project.org
>
> O
On 2013-03-12 12:28, Rolf Turner wrote:
Your code appears to be a load of dingos' kidneys, but in general when
troubleshooting one can do worse than be guided by
fortune("magnitude and direction").
cheers,
Rolf Turner
Cool. I didn't know dingos had kidneys. I thought they just ate kids.
Dear Koilos,
You've neglected to correct the MSE for df. Modifying your example, so that
it actually runs (your original regression doesn't work -- the lengths of x
and y differ):
> y <- (1:100)^2
> x <- 1:100
> lm(y~x) -> linmod # just for the sake of a simple example
>
linmod$residuals[1]^2/(2*
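The computation above was cut off in the digest; a self-contained version of the same kind of check (not necessarily the original continuation) that uses the df-corrected MSE and matches cooks.distance():

y <- (1:100)^2
x <- 1:100
linmod <- lm(y ~ x)
h  <- hatvalues(linmod)
e  <- residuals(linmod)
p  <- length(coef(linmod))            # number of estimated coefficients (2)
s2 <- sum(e^2) / df.residual(linmod)  # MSE corrected for df (n - p)
D  <- e^2 / (p * s2) * h / (1 - h)^2  # Cook's distance, by the usual formula
all.equal(D, cooks.distance(linmod))  # TRUE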
On Mar 12, 2013, at 2:59 PM, Rajat Tayal wrote:
> Dear list members,
>
> I am trying to fit a natural cubic spline to my dataset using the ns
> function in the splines package.
> Specifically, I do:
>
> library(splines)
> lm(y ~ ns(x, df=3), data =data)
>
> How do I extract the values of the
The only real improvement I can see over Ivan's solution is to use lapply
instead of the loop (this may just be personal preference though).
Something like:
list_df <- lapply(lista_rea_c, function(x)
  read.xls(file = paste0(path, x, "/", x, ".xls"), 1, header = TRUE, as.data.frame = TRUE))
my_df <- do.call(rbind, list_df)
Dear R colleagues,
Is there any code I can use that allows a 64-bit R session to automatically run
code on an adjoining 32-bit installation?
Specifically, I am using the "RODBC" package in a 64-bit R session and
have a separate installation of 32-bit R. I would like to automate data
extraction from Micro
Hi -
levelplot (package lattice) assumes that the matrix used is more or
less square. But if the matrix is e.g. 5x400, you get only a bar.
require(lattice)
# create a nice matrix
dat <- as.data.frame(matrix(runif(2000),ncol=5))
dat$V1 <- c(sin(1:400/80))
dat$V2 <- c(sin(1:400/75))
dat$V3 <-
I'm very pleased to announce the release of reports: An R package to assist in
the workflow of writing academic articles and other reports.
This is a bug fix release of reports:
http://cran.r-project.org/web/packages/reports/index.html
The reports package assists in writing reports and prese
On Mar 12, 2013, at 3:18 PM, Nuno Prista wrote:
> Dear R colleagues,
>
> Is there any code I can use that allows an R 64bits to automatically run code
> on an adjoining 32bits version?
>
> Specifically, I am using the "RODBC" package on a 64 bits version and have a
> separate installation of
In addition to the other suggestions that you have received you may want to
look at the spread.labs function in the TeachingDemos package and the
spread.labels function in the plotrix package. These only spread in 1
dimension, but may be able to do what you want. The thigmophobe.labels
function i
On 2013-03-12 13:27, Dieter Wirz wrote:
Hi -
levelplot (package lattice) assumes that the matrix used is more or
less square. But if the matrix is e.g. 5x400, you get only a bar.
require(lattice)
# create a nice matrix
dat <- as.data.frame(matrix(runif(2000),ncol=5))
dat$V1 <- c(sin(1:400/
Thanks very much Greg. 'fail miserably' - sounds like what I was
expecting. I appreciate the ideas and hadn't considered any of them. I
was looking to draw a line between centroid of one polygon and centroid
of the common area with overlapping polygon then move the polygon
further along the axis of
I have a huge list of edges with weights.
a1 b1 w1
a2 b2 w2
a3 b3 w3
a1 b1 w4
a3 b1 w5
I have to convert it into a 2-dimensional matrix:
       b1          b2  b3
a1     max(w1,w4)  0   0
a2     0           w2  0
a3     w5          0   w3
If an edge is repeated, take the maximum weight.
I have a 3 column dataset x,y,z, and I plotted a 3d scatter plot using:
cols <- myColorRamp(c(topo.colors(10)),z)
plot3d(x=x, y=y, z=z, col=cols)
I wanted to add a legend to the 3d plot showing the color ramp. Any help
will be greatly appreciated!
thanks,
Z
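A minimal sketch (made-up data, with a simple cut()-based stand-in for the unshown myColorRamp) of adding a discrete colour key with rgl's legend3d():

library(rgl)
set.seed(6)
x <- runif(100); y <- runif(100); z <- runif(100)
pal  <- topo.colors(10)
bins <- cut(z, breaks = 10, labels = FALSE)   # stand-in for myColorRamp()
plot3d(x, y, z, col = pal[bins], size = 5)
labs <- format(seq(min(z), max(z), length.out = 10), digits = 2)
legend3d("topright", legend = labs, pch = 16, col = pal, inset = 0.02)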
Hello,
The following is a bit convoluted but will do it.
dat <- read.table(text = "
a1 b1 1
a2 b2 2
a3 b3 3
a1 b1 4
a3 b1 5
")
xtabs(V3 ~ V1 + V2, data = aggregate(V3 ~ V1 + V2, data = dat, FUN = max))
Hope this helps,
Rui Barradas
On 12-03-2013 21:45, avinash sahu wrote:
I have huge l
Dear Rxperts,
I am aware that Sweave generates reports as a PDF, but I do not know of any
tools that export to an MS Word document...
Is there a way to use R to generate report/publication-quality
tables and figures and export them to MS Word (for reporting purposes)?
Thanks s
Hello,
I have a challenge!
I have a large dataset with three columns, "date","temp", "location".
"date" is in the format %m/%d/%y %H:%M, with a "temp" recorded every 10
minutes. These are surface temperatures and so they fluctuate during
the day, heating up and then cooling down, so the da
Hi,
You could also do:
library(reshape2)
dcast(dat,V1~V2,value.var="V3",max,fill=0)
# V1 b1 b2 b3
#1 a1 4 0 0
#2 a2 0 2 0
#3 a3 5 0 3
A.K.
- Original Message -
From: Rui Barradas
To: avinash sahu
Cc: r-help@r-project.org
Sent: Tuesday, March 12, 2013 7:10 PM
Subject: R
Probably not the answer you're looking for, but I only use LaTeX.
On Mar 12, 2013, at 8:02 PM, Santosh wrote:
> Dear Rxperts,
> I am aware of Sweave that generates reports into a pdf, but do know of any
> tools to generate to export to a MS Word document...
>
> Is there a way to use R to ge
knitr markdown+pandoc gives serviceable results, for low enough expectations
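A minimal sketch of that route (file names made up; pandoc must be installed and on the PATH):

library(knitr)
knit("report.Rmd", output = "report.md")      # R Markdown -> plain markdown
system("pandoc -s report.md -o report.docx")  # markdown -> Word via pandoc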
you are describing SWord, distributed at rcom.univie.ac.at
Sent from my iPhone
On Mar 12, 2013, at 20:02, Santosh wrote:
> Dear Rxperts,
> I am aware of Sweave that generates reports into a pdf, but do know of any
> tools to generate to export to a MS Word document...
>
> Is there a way to us
Hello,
Given a function with several arguments, I would like to perform an
lapply (or equivalent) while holding one or more arguments fixed to some
common value, and I would like to do it in as elegant a fashion as
possible, without resorting to writing a separate wrapper for the
function if pos
Apologies; resending in plain text...
Given a function with several arguments, I would like to perform an
lapply (or equivalent) while holding one or more arguments fixed to some
common value, and I would like to do it in as elegant a fashion as
possible, without resorting to wrapping a separate w
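For what it's worth, a minimal sketch (made-up function): lapply already passes fixed named arguments through its '...', and Map/mapply with MoreArgs covers the case where several arguments vary:

f <- function(x, a, b) a * x + b
# a and b held fixed while x varies over the list
lapply(1:5, f, a = 2, b = 10)
# several varying arguments, with the fixed ones in MoreArgs
mapply(f, x = 1:5, a = 1:5, MoreArgs = list(b = 10))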
On 12 Mar 2013, at 20:45, Peter Ehlers wrote:
Cool. I didn't know dingos had kidneys. I thought they just ate kids.
That would be the Australian ones, at least allegedly.
The canonical UK reference to the cited anatomical features is Adams (1979).
S Ellison
Re
Hi,
I am graphing with the following command from the lattice and latticeExtra
packages:
xyplot(xts,lty=c(1,2),col=c("blue","red"),type=c("l","g"),par.settings =
list(layout.heights = list(panel = c(2, 2))), aspect="xy",xlab="",ylab="%",
key=key1,screen=list(a,a,b,b,c,c,d,d), layout=c(2,2),
scales
Hi all:
Is there a plotting tool that uses different colors to indicate different
magnitudes of the data?
The plot is in the attachment.
Many thanks.
I am new to R and have started to play around with the naive Bayes algorithm,
but I ran into a problem that I could not figure out: for the following sample
data, the predict function (using a naive Bayes model) always gives me a
zero-length result. Do you have any hints? Thanks
Here is the code:
> titanic_sma
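Without the full example it is hard to be sure, but a frequent cause of zero-length predictions from e1071's naiveBayes is a newdata whose columns do not match the training predictors. A minimal sketch built from the built-in Titanic table (which may or may not resemble the poster's titanic_small data):

library(e1071)
titanic_df <- as.data.frame(Titanic)   # columns Class, Sex, Age, Survived, Freq
titanic_df <- titanic_df[rep(seq_len(nrow(titanic_df)), titanic_df$Freq), 1:4]
fit <- naiveBayes(Survived ~ Class + Sex + Age, data = titanic_df)
# newdata keeps the predictor columns, with the same names and factor levels
predict(fit, newdata = titanic_df[1:5, c("Class", "Sex", "Age")])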
Hi everybody,
I'm trying to create a numerical data frame on which to perform PRCC.
So far I have created a data frame that consists of function/vector
output that displays in numerical form, but when I try and run PRCC
(from epiR package) I get the following error message:
"Error in solve.defau
Hi,
The attachment has been deleted. Please be more specific.
Regards,
Pascal
On 13/03/13 10:20, meng wrote:
Hi all:
Is there a plotting tool that uses different colors to indicate different
magnitudes of the data?
The plot is in the attachment.
Many thanks.
I am not sure if I should ask this question in this list. But I'll try.
Currently I am trying to analyze images using EBImage and biOps.
One of the features that I need to extract from various images is the color
spectrum, namely, which colors each image consists of.
So, each image hopefully can
Hello R Users,
I have ADCP (Acoustic Doppler Current Profiler) data measurements for a
river and I want to process these data using R. Is there a R package to
handle ADCP data ? Any suggestions are highly appreciated.
Thanks.
Janesh
Hi,
I am doing a project on authorship attribution, where my term-document
matrix has around 10450 features.
Can you please suggest a package with feature-selection
functions to reduce the dimensionality?
Regards,
Venkata Satish Basva
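One simple, commonly used reduction is dropping very sparse terms with tm::removeSparseTerms; packages such as FSelector or caret offer more targeted feature selection. A minimal sketch with a made-up three-document corpus:

library(tm)
docs <- VCorpus(VectorSource(c("the cat sat on the mat",
                               "the dog sat on the log",
                               "cats and dogs are pets")))
tdm <- TermDocumentMatrix(docs)
# keep only terms that appear in at least half of the documents
tdm_small <- removeSparseTerms(tdm, sparse = 0.5)
inspect(tdm_small)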