I am using \Sexpr to include a variable in the title of a Sweave document:
\documentclass[a4paper]{article}
<<>>=
#mytitlevar <- "Stuff"       # case 1: everything is fine
mytitlevar <- "Stuff_first"  # case 2: the underscore turns the following text into a subscript
@
\title{MyTitle: \\ \Sexpr{mytitlevar} }
\begin{document}
\maketitle
\end{document}
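A minimal sketch of one workaround, assuming the goal is to keep the underscore visible in the printed title: escape it for LaTeX inside the chunk before it reaches \Sexpr (the gsub() call below is illustrative):
<<>>=
raw.title <- "Stuff_first"
# the replacement "\\\\_" becomes the two characters \_ in the LaTeX output
mytitlevar <- gsub("_", "\\\\_", raw.title)
@
\title{MyTitle: \\ \Sexpr{mytitlevar} }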
I have a code chunk that produces a figure. In my particular case, however, the data does not always exist. Where the data exists, the code chunk is of course trivial (case 1), but what do I do for case 2, where the data does not exist?
I can obviously prevent the code from being executed by checking for the data first, as sketched below.
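A minimal sketch of one approach inside the chunk itself, assuming the data (if present) lives in an object called mydata (the name is illustrative): guard the plotting code with exists(), so the chunk always runs but only draws the real figure when there is data.
<<myfigure, fig=TRUE>>=
if (exists("mydata")) {
  plot(mydata)                          # the real figure
} else {
  plot.new()
  text(0.5, 0.5, "no data available")   # placeholder so a figure is still produced
}
@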
Thank you all for your comments. As a result of my own research, I found this method, which seems to do what I want in addition to your suggestions:
tools::texi2dvi("myfile.tex", pdf=TRUE)
Thanks again,
Ralf
On Mon, Nov 15, 2010 at 6:42 AM, Duncan Murdoch
wrote:
> On 15/11/2010 6:22 AM, Dieter Menne wrote:
I am looking for a way to determine the full filepath to the currently
executed script. Any ideas?
Ralf
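A hedged sketch of two common approaches; neither is bulletproof. When the file is started with Rscript, the path can be parsed out of commandArgs(); when it is source()d, some people read the ofile element of the source() frame, which relies on internals and may not work in every front end:
# started as:  Rscript myscript.R
args <- commandArgs(trailingOnly = FALSE)
file.arg <- grep("^--file=", args, value = TRUE)
if (length(file.arg) > 0) {
  script.path <- normalizePath(sub("^--file=", "", file.arg))
}

# run via source("C:/scripts/myscript.R") -- relies on source() internals
# script.path <- sys.frame(1)$ofile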
make external calls.
Ralf
On Sat, Nov 13, 2010 at 4:29 PM, Johannes Huesing wrote:
> Ralf B [Sat, Nov 13, 2010 at 10:03:49PM CET]:
>> It seems that Sweave is supposed to be used from Latex and R is called
>> during the LaTeX compilation process whenever R chunks appear.
>
It seems that Sweave is supposed to be used from LaTeX, with R being called during the LaTeX compilation process whenever R chunks appear. What about the other way round? I would like to trigger it from R. Is this possible? I understand that this does not correspond to the idea of literate programming.
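For reference, a minimal sketch of driving the whole process from R, assuming a file myfile.Rnw in the working directory (the file name is illustrative); this pairs with the tools::texi2dvi() call shown above:
Sweave("myfile.Rnw")                       # runs the R chunks, writes myfile.tex
tools::texi2dvi("myfile.tex", pdf = TRUE)  # compiles the LaTeX to myfile.pdf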
type of file you want). I usually use Ghostscript for
> tinkering with already created postscript or PDF files. To me there
> is more appropriate software than R to use if you want to
> edit/merge/manipulate postscript or PDF files.
>
> Cheers,
>
> Josh
>
I created multiple postscript files using ?postscript. How can I merge
them into a single postscript file using R? How can I merge them into
a single pdf file?
Thanks a lot,
Ralf
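Two hedged sketches. If the plots can be re-created, the simplest route is to write them all to one device in the first place; otherwise an external tool such as Ghostscript (if installed) can concatenate the existing files -- the gs command line below is illustrative and the executable name is system-dependent (gswin32c on Windows, gs elsewhere):
# (a) several plots into one file directly
pdf("all_plots.pdf", onefile = TRUE)    # postscript("all_plots.ps") works the same way
plot(rnorm(100))
hist(rnorm(100))
dev.off()

# (b) merge already existing files by calling Ghostscript from R
system(paste("gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite",
             "-sOutputFile=merged.pdf file1.pdf file2.pdf"))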
I have this script, which I use to get an epoch timestamp with an accuracy of one second (based on R's apparent inability to produce millisecond-accurate timestamps -- at least I have not seen a straightforward solution):
nowInSeconds <- as.numeric(Sys.time())
nowInMS <- nowInSeconds * 1000
print(nowInSeconds)
print(nowInMS)
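For what it is worth, Sys.time() does carry sub-second resolution on most platforms; it is only the default printing that hides it. A minimal sketch:
op <- options(digits.secs = 3)     # show milliseconds when printing POSIXct values
now <- Sys.time()
print(now)                         # e.g. "2010-11-15 10:12:33.123"
print(as.numeric(now) * 1000)      # epoch in milliseconds, fractional part preserved
options(op)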
The Rserve documentation at
http://rosuda.org/Rserve/doc.shtml#start
states that on Windows, even when multiple connections are made to Rserve, the workspaces are not separated physically but share one environment, which will obviously cause problems and should therefore not be used.
Are there any alternatives?
Hi all,
I tried to run Rserve: I installed it from CRAN using
install.packages("Rserve")
and tried to run it from the command line using:
R CMD Rserve
I am getting an error telling me that the command perl cannot be found. What is wrong and what can I do to fix this? Do I need to install any other software first?
236), and possibly fortune(181).
>
> What is your ultimate goal? Maybe we can help you find a better way.
>
Can one create a variable through a function, given its name as a string?
createVariable <- function(name) {
  outputVariable = name
  name <- NULL
}
after calling
createVariable("myVar")
I would like to have a variable myVar initialized with NULL in my
environment. Is this possible?
Ralf
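A minimal sketch of the usual way to do this with assign(); whether the global environment is really the right target is a design question:
createVariable <- function(name) {
  assign(name, NULL, envir = .GlobalEnv)   # bind 'name' to NULL in the workspace
}

createVariable("myVar")
exists("myVar")   # TRUE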
Here the general (perhaps silly) question first: is it possible for a script to find out whether it was sourced by another script or run directly?
Here is a small example with two scripts:
# script A
print ("This is script A")
# script B
source("C:/scriptA.R")
print ("This is script B")
I would like script A to detect which of the two cases applies.
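A hedged sketch of one heuristic, assuming the script is either source()d or run non-interactively with Rscript: inspect the call stack, which is empty at top level but not inside source(). This relies on how source() evaluates the file and may not hold in every front end:
# near the top of script A
sourced <- sys.nframe() > 0   # > 0 while the code is evaluated inside source()
if (sourced) {
  print("script A was sourced by another script")
} else {
  print("script A is running on its own")
}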
Here is the modified script with what I learned from Joshua:
#
# superscript
#
output <- NULL
writeOutput <- function() {
  processTime <- proc.time()
  outputFilename <- paste("C:/myOutput_", processTime[3], ".csv", sep="")
  write.csv(output, file = outputFilename)
}
on.exit(writeOutput())
I think base::on.exit() will do the trick. Thank you :)
Ralf
On Wed, Oct 6, 2010 at 11:24 AM, Ralf B wrote:
>> If you are running these interactively, you could make your own
>> source() function. In that function you could define the super and
>> subscripts, and have i
> If you are running these interactively, you could make your own
> source() function. In that function you could define the super and
> subscripts, and have it call writeOutput on.exit(). I suspect you
> could get something like that to work even in batch mode by having R
> load the function by
Hi all,
in order to add certain standard functionality to a large set of scripts that I maintain, I developed a superscript that I manually include at the beginning of every script. Here is an example of a very simplified superscript and subscript:
#
# superscript
#
output <- NULL
writeOutput <- function() {
Hi all,
I tried to install the rimage package in order to get the function ?read.jpeg. However, I get this error, independent of which mirror I choose:
install.packages("rimage")
--- Please select a CRAN mirror for use in this session ---
Warning message:
In getDependencies(pkgs, dependencies, available,
alues for each the same will
> make this process easier. Let me know if you'd like more details.
>
> HTH
>
> Peter Alspach
>
Hi,
the following code:
x <- c(1,2,NA)
length(x)
returns 3, counting the numbers as well as the NA. How can I exclude NAs from this count?
Ralf
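For reference, two standard ways to count only the non-missing values:
x <- c(1, 2, NA)
sum(!is.na(x))       # 2
length(na.omit(x))   # 2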
Hi group,
I am currently plotting two densities using the following code:
x1 <- c(1,2,1,3,5,6,6,7,7,8)
x2 <- c(1,2,1,3,5,6,5,7)
plot(density(x1, na.rm = TRUE))
polygon(density(x2, na.rm = TRUE), border="blue")
However, I would like to avoid drawing a border around the second density, as it adds a nasty bottom line along the x-axis.
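A minimal alternative: draw the second density with lines() instead of polygon(), which adds only the curve and no closing segment along the baseline:
x1 <- c(1,2,1,3,5,6,6,7,7,8)
x2 <- c(1,2,1,3,5,6,5,7)
plot(density(x1, na.rm = TRUE))
lines(density(x2, na.rm = TRUE), col = "blue")   # curve only, no bottom border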
Hi group,
I am creating two density plots as shown in the code below:
x1 <- c(1,4,5,3,2,3,4,5,6,5,4,3,2,1,1,1,2,3)
x2 <- c(1,4,5,3,5,7,4,5,6,1,1,1,2,1,1,1,2,3)
plot(density(x1, na.rm = TRUE))
polygon(density(x2, na.rm = TRUE), border="blue")
How can I determine the area that is covered between the two density curves?
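A hedged sketch of one way to approximate the overlap area numerically: evaluate both densities on a common grid and integrate the pointwise minimum (the grid limits and n below are illustrative choices):
x1 <- c(1,4,5,3,2,3,4,5,6,5,4,3,2,1,1,1,2,3)
x2 <- c(1,4,5,3,5,7,4,5,6,1,1,1,2,1,1,1,2,3)
rng <- range(c(x1, x2))
d1 <- density(x1, from = rng[1], to = rng[2], n = 512)
d2 <- density(x2, from = rng[1], to = rng[2], n = 512)
overlap <- sum(pmin(d1$y, d2$y)) * diff(d1$x[1:2])   # approximate shared area
overlap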
Hi group,
I would like to draw multiple Lorenz curves in a single plot using
data already prepared. Here is a simple example:
require("lawstat")
lorenz.curve(c(1,2,3),c(4,5,4))
lorenz.curve(c(1,2,3),c(4,2,1))
This example draws two separate graphs. How can I combine them in a single plot in a distinguishable way?
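A hedged sketch that bypasses lawstat::lorenz.curve and draws the curves by hand, so the second one can simply be added with lines(); the helper below assumes the usual single-variable definition of the Lorenz curve and may not match lorenz.curve's two-argument interface exactly:
lorenz <- function(y) {
  y <- sort(y)
  list(p = c(0, seq_along(y)) / length(y),   # cumulative population share
       L = c(0, cumsum(y)) / sum(y))         # cumulative share of y
}
a <- lorenz(c(4, 5, 4))
b <- lorenz(c(4, 2, 1))
plot(a$p, a$L, type = "l",
     xlab = "cumulative population share", ylab = "cumulative share of y")
lines(b$p, b$L, col = "blue")
abline(0, 1, lty = 2)   # line of perfect equality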
Dear R users,
I would like to create a frequency table from raw data and then access the classes/bins and their respective frequencies separately. Here is the code to create the frequency table:
x1 <- c(1,5,1,1,2,2,3,4,5,3,2,3,6,4,3,8)
t1 <- table(x1)
print(t1[1])
It is easy to plot this, but how do I get at the bins and their frequencies directly?
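For reference, the bins and counts of a table object can be pulled apart directly:
x1 <- c(1,5,1,1,2,2,3,4,5,3,2,3,6,4,3,8)
t1 <- table(x1)
bins  <- as.numeric(names(t1))   # the distinct values (bin labels)
freqs <- as.vector(t1)           # their frequencies
bins
freqs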
Hi,
in order to save space for a publication, it would be nice to have a
combined scatter and density plot similar to what is shown at
http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=78
I wonder if anybody has already developed code for this and is willing to share.
Hi R group,
I am wondering if there is any implementation of the
Baumgartner-Weiss-Schindler test in R, as described in:
http://www.jstor.org/stable/2533862
It is a non-parametric test that, similar to the KS test and others, tests the null hypothesis that two sets of data originate from the same distribution.
Hi David,
I would like to apologize for what I wrote earlier. It was late and I
was frustrated. Please give me time to adapt to the formal structures
of the forum.
Best,
Ralf
On Thu, Aug 5, 2010 at 7:32 AM, David Winsemius wrote:
>
> On Aug 5, 2010, at 4:10 AM, Ralf B wrote:
>
Thank you for such a careful and thorough analysis of the problem and for the comparison with your configuration. I very much appreciate it.
For completeness and (perhaps) further comparison, I have executed 'version' and sessionInfo() as well:
> version
_
platform i386-pc-mingw32
attention.
On Wed, Aug 4, 2010 at 6:16 PM, David Winsemius wrote:
>
> On Aug 4, 2010, at 5:49 PM, Ralf B wrote:
>
>> Hi R Users,
>>
>> I have two vectors, x and y, of equal length representing two types of
>> data from two studies. I would like to test if they
I am dealing with very large data frames, artificially created with the following code, that are combined using rbind:
a <- rnorm(500)
b <- rnorm(500)
c <- rnorm(500)
d <- rnorm(500)
first <- data.frame(one=a, two=b, three=c, four=d)
second <- data.frame(one=d, two=c, three=b, four=a)
.
>
> -- Bert
>
> On Wed, Aug 4, 2010 at 10:10 AM, Ralf B wrote:
>>> In general, the lapply(split(...)) construction should never be used.
>>
>> Why? What makes it so bad to use?
>>
>
__
R-help@r-project.org mai
Hi R Users,
I need to produce a simple report consisting of some graphs and a statistic. Here is a simplification of it:
# graphics output test
a <- c(1,3,2,1,4)
b <- c(2,1,1,1,2)
c <- c(4,7,2,4,5)
d <- rnorm(500)
e <- rnorm(600)
op <- par(mfrow=c(3,2))
pie(a)
pie(b)
pie(c)
text(ks.test(d,e))
obvious
Hi R Users,
I have two vectors, x and y, of equal length representing two types of
data from two studies. I would like to test if they are similar enough
to use them interchangeably. No assumptions about distributions can be
made (initial tests clearly show that they are not normal).
Here some res
1) When running ks.test, I am getting the following warning after the test presents its result:
'ks.test(x, y): cannot compute correct p-values with ties'
I wonder what this means and what causes it.
2) Also, how do I calculate an effect size from this statistic?
R.
> In general, the lapply(split(...)) construction should never be used.
Why? What makes it so bad to use?
Hi all,
x <- cbind(rnorm(500),rnorm(500))
KLdiv(x, eps=1e-4)
KLdiv(x, eps=1e-5)
KLdiv(x, eps=1e-6)
KLdiv(x, eps=1e-7)
KLdiv(x, eps=1e-8)
KLdiv(x, eps=1e-9)
KLdiv(x, eps=1e-10)
...
KLdiv(x, eps=1e-100)
...
KLdiv(x, eps=1e-1000)
When calling flexmix::KLdiv using the given code I get results with
in
y you don't have to worry about nuisances such
> as NA padding. Just a thought...
>
> Dennis
>
> On Tue, Aug 3, 2010 at 7:54 PM, Ralf B wrote:
>>
>> Actually it does -- one has to feed the result back into the
>> original variable:
>>
>> add.col <-
mydata <- add.col(mydata, c(1,2,3,4), "test1")
mydata <- add.col(mydata, c(1,2,3,4,5,6,7,8),"test2")
mydata
Thanks a lot, David and all others here you made the effort!
Ralf
On Tue, Aug 3, 2010 at 10:37 PM, David Winsemius wrote:
>
> On Aug 3, 2010, at 10:35 PM, David Winsemi
Hi all,
I have a data frame with columns over which I would like to run repeated functions for data analysis. Currently I am only running recursively over two columns, where column 1 has two states over which I split and column 2 has three states. The function therefore runs 2 x 3 = 6 times, as shown
Hi experts,
I am trying to write a very flexible method that allows me to add a
new column to an existing data frame. This is what I have so far:
add.column <- function(df, new.col, name) {
  n.row <- dim(df)[1]
  length(new.col) <- n.row
  names(new.col) <- name
  return(cbind(df, new.col))
}
I am plotting a heatmap using the hist2d function:
require("gplots")
x <- rnorm(2000)
y <- rnorm(2000)
hist2d(x, y, freq=TRUE, nbins=50, col = c("white",heat.colors(256)))
However, I would like to flip the vertical y axis so that the upper
left corner serves as the y-origin. How can I do that?
R
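A hedged sketch: in the gplots versions I have seen, hist2d() can return its bin mid-points and count matrix without plotting, so one option is to redraw the result with image() and a reversed y range; whether hist2d() itself forwards ylim to image() may depend on the package version:
require("gplots")
x <- rnorm(2000)
y <- rnorm(2000)
h <- hist2d(x, y, nbins = 50, show = FALSE)
image(h$x, h$y, h$counts, ylim = rev(range(h$y)),   # y axis now runs top-down
      col = c("white", heat.colors(256)))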
I have to deal with data frames that contain multiple entries of the same item (based on an identifying column 'id'). The second column mostly corresponds to the id column, which means that duplicate entries can be eliminated with ?unique.
a <- unique(data.frame(timestamp=c(3,3,3,5,8), mylabel=
Hi R experts,
I have the following timeseries data:
#example data structure
a <- c(NA,1,NA,5,NA,NA,NA,10,NA,NA)
c <- c(1:10)
df <- data.frame(timestamp=a, sequence=c)
print(df)
where I would like to linearly interpolate between the points 1, 5, and 10 in 'timestamp'. The original timestamps should not be changed; see the sketch below.
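A minimal sketch using approx(), which fills the NAs by linear interpolation over the sequence index (zoo::na.approx does much the same if the zoo package is available); leading and trailing NAs stay NA unless rule = 2 is used:
a  <- c(NA, 1, NA, 5, NA, NA, NA, 10, NA, NA)
s  <- 1:10
df <- data.frame(timestamp = a, sequence = s)
df$interpolated <- approx(x = s[!is.na(a)], y = a[!is.na(a)], xout = s)$y
df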
I have a data set that causes flexmix::KLdiv to produce NA as a result, and I was told that increasing the sensitivity of the 'eps' value can be used to avoid a lot of values being set to a default (which presumably causes the problem).
Now here my question.
When running KLdiv on a normal dis
With environment I actually meant workspace.
On Thu, Jul 29, 2010 at 1:22 PM, Ralf B wrote:
> Is it possible to remove all variables in the current environment
> through a R command.
>
> Here is what I want:
>
> x <- 5
> y <- 10:20
> reset()
> print(x)
> prin
Is it possible to remove all variables in the current environment through an R command?
Here is what I want:
x <- 5
y <- 10:20
reset()
print(x)
print(y)
Output should be NULL for x and y, and not 5 and 10:20.
Can one do that in R?
Best,
Ralf
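For reference, the standard idiom; note that after the removal print(x) raises an "object not found" error rather than printing NULL, and the reset function removes itself as well:
reset <- function() rm(list = ls(envir = .GlobalEnv), envir = .GlobalEnv)

x <- 5
y <- 10:20
reset()
ls()   # character(0): the workspace is empty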
Hi,
I have distributions from two different data sets and I would like to
measure how similar their distributions (in terms of their bin
frequencies) are. In other words, I am not interested in the exact sequence of data points but rather in their distributional properties and in their similarity
I am looking for a mailing list for general statistical questions that
are not R related. Do you have any suggestions for lists that are busy
and helpful and/or lists that you use and recommend?
Thanks in advance,
Ralf
ability...
Ralf
On Fri, Jul 16, 2010 at 10:41 AM, Peter Ehlers wrote:
> On 2010-07-16 7:56, Ralf B wrote:
>>
>> Hi all,
>>
>> when running KL on a small data set, everything is fine:
>>
>> require("flexmix")
>> n<- 20
>> a<- rn
Hi all,
when running KL on a small data set, everything is fine:
require("flexmix")
n <- 20
a <- rnorm(n)
b <- rnorm(n)
mydata <- cbind(a,b)
KLdiv(mydata)
however, when this dataset increases
require("flexmix")
n <- 1000
a <- rnorm(n)
b <- rnorm(n)
mydata <- cbind(a,b)
KLdiv(mydata)
KL se
I have the following data structure:
n=5
mydata <- data.frame(id=1:n, x=rnorm(n), y=rnorm(n), id=1:n,
x=rnorm(n), y=rnorm(n))
print(mydata)
producing the following represention
id x y id.1 x.1 y.1
1 1 0.5326855 -2.076337031 0.7930274 -1.0530558
2 2 0.78889
Hi all,
I wonder why KLdiv does not work with data.frames:
n <- 50
mydata <- data.frame(
sequence=c(1:n),
data1=c(rnorm(n)),
data2=c(rnorm(n))
)
# does NOT work
KLdiv(mydata)
# works fine
dataOnly <- cbind(mydata$data1, mydata$data2, mydata$group)
KLdiv(dataOnly)
Any ideas?
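A hedged note: flexmix::KLdiv is set up for a numeric matrix, so converting the relevant columns usually suffices; the selection below assumes only data1 and data2 matter:
require("flexmix")
KLdiv(as.matrix(mydata[, c("data1", "data2")]))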
I am performing some analysis over a large data frame and would like
to conduct repeated analysis over grouped-up subsets. How can I do
that?
Here some example code for clarification:
require("flexmix") # for Kullback-Leibler divergence
n <- 23
groups <- c(1,2,3)
mydata <- data.frame(
I am resending this, as I believe it did not arrive on the mailing list when I first emailed it.
I have a set of labels arranged along a timeframe in a. Each label has
a timestamp and marks a state until the next label. The dataframe a
contains 5 such timestamps and 5 associated labels. This means,
I have a set of labels arranged along a timeframe in a. Each label has
a timestamp and marks a state until the next label. The dataframe a
contains 5 such timestamps and 5 associated labels. This means, on a continuous scale between 1 and 100, there are 5 markers; e.g. 'abc' marks the timestamps between
Hi all,
I would like to detect all strings in the vector 'content' that
contain the strings from the vector 'search'. Here a code example:
content <- data.frame(urls=c(
"http://www.google.com/search?source=ig&hl=en&rlz=&=&q=stuff&aq=f&aqi=g10&aql=&oq=&gs_r
Given vectors of strings of arbitrary length
content <- c("abc", "def")
searchset <- c("a", "abc", "abcdef", "d", "def", "defghi")
Is it possible to determine the set of strings in content that match the searchset in the sense of 'startswith'? This would be a vector of all strings in content that start with one of the search strings.
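A hedged sketch using substring(), which avoids regular-expression escaping; the helper below is illustrative and returns, for each element of the first vector, whether any element of the second vector is a prefix of it -- swap the two arguments if the intended direction is the other way round:
content   <- c("abc", "def")
searchset <- c("a", "abc", "abcdef", "d", "def", "defghi")

starts.with.any <- function(strings, prefixes) {
  sapply(strings, function(s)
    any(substring(s, 1, nchar(prefixes)) == prefixes))
}

content[starts.with.any(content, searchset)]     # content strings with a search prefix
searchset[starts.with.any(searchset, content)]   # or the other direction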
risons are slow.
>
> Hadley
>
>
> On Tue, Jul 13, 2010 at 6:52 AM, Ralf B wrote:
>> I am asking this question because String comparison in R seems to be
>> awfully slow (based on profiling results) and I wonder if perhaps '=='
>> alone is not the best one can
or the question. So, to re-phrase my question: are there more (run-time) efficient ways to find out whether two strings (about 100-150 characters long) are equal?
Ralf
On Sun, Jul 11, 2010 at 2:37 PM, Sharpie wrote:
>
>
> Ralf B wrote:
>>
>> What is the fastest way to
What is the fastest way to compare two strings in R?
Ralf
The following code produces a heatmap based on normalized data. I would like to mirror the x and y axes of this plot. Any idea how to do that?
require("gplots")
x <- rnorm(500)
y <- rnorm(500)
hist2d(x, y, freq=TRUE, nbins=50, col = c("white",heat.colors(256)))
Best,
Ralf
of the form R < foo.R, you'd need to inspect your system's
> process table (so don't do that).
>
> Hope this helps.
>
> Allan
>
> On 09/07/10 10:48, Ralf B wrote:
>>
>> Is there a way for a script to find out about its own name ?
>>
>>
I am trying to calculate a Kullback-Leibler divergence from two
vectors with integers but get NA as a result when trying to calulate
the measure. Why?
x <- cbind(stuff$X, morestuff$X)
x[1:5,]
[,1] [,2]
[1,] 293 938
[2,] 293 942
[3,] 297 949
[4,] 290 956
[5,] 294 959
KLdiv(x)
I would like to plot some text in a existing plot graph. Is there a
very simple way to do that. It does not need to be pretty at all (just
maybe a way to center it or define a position within the plot). ( ? )
Ralf
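For reference, text() places a string at given user coordinates inside an existing plot, and mtext() writes into the margins:
plot(rnorm(20))
text(x = 10, y = 0, labels = "some annotation")   # at data coordinates
mtext("a note above the plot", side = 3)          # in the margin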
Is there a way for a script to find out about its own name ?
Ralf
Parametric regression produces R^2 as a measure of how well the model fits the sample and adjusted R^2 as a measure of how well it generalizes to the population. What is the equivalent for non-parametric regression (e.g. the loess function)?
Ralf
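There is no universally agreed equivalent, but a commonly used descriptive pseudo-R^2 simply compares the residual variation of the fit to the total variation; a minimal sketch for loess (interpret with care, since there is no parametric model behind it):
fit <- loess(dist ~ speed, data = cars)
ss.res <- sum(residuals(fit)^2)
ss.tot <- sum((cars$dist - mean(cars$dist))^2)
1 - ss.res / ss.tot   # pseudo R-squared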
I have two data sets, each a vector of 1000 numbers, each vector representing a distribution (i.e. 1000 numbers, each of which represents a frequency at one point on a scale between 1 and 1000). For simplification, here is a short version with only 5 points:
a <- c(8,10,8,12,4)
b <- c(7,11,8,10,5)
Hi experts,
I am currently developing some code that checks a large number of strings for the existence of sub-strings and patterns (detecting sub-strings within URLs). I wonder if there is information about how well particular string operations in R perform, together with comparisons. Are there recommendations?
Hi,
is there such a thing as a profiler for R that reports a) how much processing time is used by particular functions and commands, and b) how much memory is used for creating how many objects (or types of data structures)? In a way I am looking for something similar to a Java profiler.
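For reference, base R ships with a sampling profiler; the memory counters are only available when R was built with memory profiling enabled. A minimal sketch:
Rprof("profile.out", memory.profiling = TRUE)   # start profiling
invisible(replicate(100, sort(rnorm(10000))))   # ... code to be profiled ...
Rprof(NULL)                                     # stop profiling
summaryRprof("profile.out")                     # time (and memory) by function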
Are there packages that allow improved String and URL processing?
E.g. extract parts of a URLs such as sub-domains, top-level domain,
protocols (e.g. https, http, ftp), file type based on endings, check
if a URL is valid or not, etc...
I am currently only using split and paste. Are there better alternatives?
Hi all,
I have this (non-working) script:
dataTest <- data.frame(col1=c(1,2,3))
new.data <- c(1,2)
name <- "test"
n.row <- dim(dataTest)[1]
length(new.data) <- n.row
names(new.data) <- name
cbind(dataTest, name=new.data)
print(dataTest)
and would like to bind the new column 'new.data' to 'dataTest' under the column name stored in 'name'; see the sketch below.
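A minimal sketch of the two missing pieces: assign the result of cbind() back to dataTest, and set the column name afterwards from the variable (writing cbind(dataTest, name = new.data) literally calls the column "name"):
dataTest <- data.frame(col1 = c(1, 2, 3))
new.data <- c(1, 2)
name     <- "test"
length(new.data) <- nrow(dataTest)        # pad with NA to the right length
dataTest <- cbind(dataTest, new.data)     # keep the result
names(dataTest)[ncol(dataTest)] <- name   # rename the added column to "test"
print(dataTest)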
curves (??) for both distributions. Am I missing something here?
[hand-drawn ASCII sketch of the density curve omitted]
Thanks, Ralf
i*sd )*exp() something like that .
> Sorry about the confusion
>
> Carrie
>
> On Thu, Jun 24, 2010 at 10:43 PM, Ralf B wrote:
>>
>> Hi Carrie,
>>
>> the output is defined by you; density() only creates the function
>> which you need to plot using th
Hi Carrie,
the output is defined by you; density() only creates the object, which you then need to draw with the plot() function. When you call plot(density(x)) you get the output on the screen. Use pdf() if you want to create a PDF file, png() for a PNG file, or postscript() if you want PostScript output.
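A minimal sketch of sending a plot to a PDF file instead of the screen:
x <- rnorm(100)
pdf("density.pdf")      # open the file device
plot(density(x))
dev.off()               # close it; the figure is now in density.pdf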
I assume R won't easily generate nice reports (unless one starts using Sweave and LaTeX), but perhaps somebody here knows a package that can create report-like output for special cases? How can I simply send plot output to a PDF? Perhaps you know a package I should check out? What do you do to create reports?
Unfortunately not. I want a qqplot from two variables.
Ralf
On Thu, Jun 24, 2010 at 7:23 PM, Joris Meys wrote:
> Also take a look at qq.plot in the package "car". Gives you exactly
> what you want.
> Cheers
> Joris
>
> On Fri, Jun 25, 2010 at 12:55 AM, Ralf B wrote
2010 at 4:44 PM, Ralf B wrote:
>> I am a beginner in R, so please don't step on me if this is too
>> simple. I have two data sets datax and datay for which I created a
>> qqplot
>>
>> qqplot(datax,datay)
>>
>> but now I want a line that indicates the p
I am a beginner in R, so please don't step on me if this is too simple. I have two data sets, datax and datay, for which I created a qqplot:
qqplot(datax,datay)
but now I want a line that indicates the perfect match so that I can see how much the plot deviates from the ideal. This ideal however is
no
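A hedged sketch: when the two samples are on comparable scales, the usual reference is the identity line; an alternative, analogous to what qqline() does for a normal QQ plot, is a line through the first and third quartiles of both samples:
datax <- rnorm(200)
datay <- rnorm(200, sd = 1.2)
qqplot(datax, datay)
abline(0, 1, col = "red", lty = 2)                  # identity line: perfect match
qx <- quantile(datax, c(0.25, 0.75))
qy <- quantile(datay, c(0.25, 0.75))
slope <- diff(qy) / diff(qx)
abline(qy[1] - slope * qx[1], slope, col = "blue")  # line through the quartiles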
Hi fans,
is it possible for a script to check whether a library has been installed? I want to install it automatically if it is missing, to avoid scripts crashing when run on a new machine...
Ralf
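A minimal sketch of the usual idiom; require() returns FALSE instead of stopping when the package is missing (the repos URL is illustrative):
if (!require("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2", repos = "http://cran.r-project.org")
  library(ggplot2)
}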
Jep! I forgot to use sep="" for paste and introduced a space in front of the filename... damn, one hour of my life!
Ralf
2010/6/24 Uwe Ligges :
>
>
> On 24.06.2010 19:02, Ralf B wrote:
>>
>> I try to load a file
>>
>> myData<- read.csv(file="C
I try to load a file
myData <- read.csv(file="C:\\myfolder\\mysubfolder\\mydata.csv",
head=TRUE, sep=";")
and get this error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'C:\myfolder\mysubfolder\mydata.csv': No such file or directory
Unfortunately, I have a lot of errors with RMySQL -- but that is
another thread...
Ralf
On Thu, Jun 24, 2010 at 10:31 AM, James W. MacDonald
wrote:
> Hi Ralf,
>
> Ralf B wrote:
>>
>> Sorry for the lack of details. Since I run the same SQL first directly
>> on M
that you aren't mentioning.
>
> Try comparing CDFs instead of pdfs.
>
> At 03:33 PM 6/23/2010, Ralf B wrote:
>>
>> I am trying to do something in R and would appreciate a push into the
>> right direction. I hope some of you experts can help.
>>
>> I hav
Since the query runs a lot faster in the query browser, I don't suspect this to be the main reason.
Any ideas?
Best,
Ralf
On Wed, Jun 23, 2010 at 4:36 PM, James W. MacDonald
wrote:
> Hi Ralf,
>
> Ralf B wrote:
>>
>> I am running a simple SQL SELECT statement that involvs 50k + data
I am running a simple SQL SELECT statement that involvs 50k + data
points using R and the RJDBC interface. I am facing very slow response
times in both the RGUI and the R console. When running this SQL
statement directly in a SQL client I have processing times that are a
lot lot faster (which means
I am trying to do something in R and would appreciate a push in the right direction. I hope some of you experts can help.
I have two distributions, each obtained from about 1 datapoints, non-normal with a multi-modal shape (when eye-balling densities), but other than that
In addition to the previous email:
What plots would you suggest in addition to density/histogram plots, and how can I produce them with R? Perhaps one of you has an example?
Thanks a lot,
Ralf
Hi all,
I have two very large samples of data (1+ data points) and would
like to perform normality tests on it. I know that p < .05 means that
a data set is considered as not normal with any of the two tests. I am
also aware that large samples tend to lead more likely to normal
results (Andy F
Hi all,
I am suffering from a very slow RJDBC (7 rows from a simple select take something like 10 minutes). Does anybody know if RMySQL is faster? Or RODBC, for that matter? What are the alternatives, and what can be done to get realistic performance out of MySQL when connected to R via JRI?
Best,
Ralf
> I haven't check much of what you wrote, so just a blind guess. What about in
> the function's body before cbind():
> names(new.col) <- "more stuff"
> ?
>
> HTH,
> Ivan
>
> On 6/17/2010 11:09, Ralf B wrote:
>>
>> Hi all,
>>
Hi all,
I have two distributions / densities (drew density plots and
eye-balled some data). Given that I don't want to make any assumptions
about the data (e.g. normality, existence of certain distribution
types and parameters), what are my options for testing that the
distributions are the same?
Hi all,
probably a simple problem for you but I am stuck.
This simple function adds columns (with differing length) to data frames:
add.col <- function(df, new.col) {
  n.row <- dim(df)[1]
  length(new.col) <- n.row
  cbind(df, new.col)
}
Now I would like to extend that method
Hi all,
I have the following script, which won't plot (tried in the RGUI and also in Eclipse StatET):
library(ggplot2)# for plotting results
userids <- c(1,2,3)
for (userid in userids){
qplot(c(1:10), c(1:20))
}
print ("end")
No plot shows up. If I run the following:
library(ggplot2)
I have a script running in the StatET Eclipse environment that
executes the ggplot2 command qplot in a function:
# Creates the plot
createPlot <- function(){
print("Lets plot!")
qplot(1:10, letters[1:10])
}
When executing the qplot line directly at the console, it works. When running the whole script, no plot appears; see the sketch below.
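For reference, ggplot2 (like lattice) only draws a plot when the plot object is print()ed; this happens automatically at the console prompt but not inside a function or a source()d script. A minimal sketch of the usual fix:
library(ggplot2)

createPlot <- function() {
  print("Lets plot!")
  print(qplot(1:10, letters[1:10]))   # explicit print() so the plot is drawn
}
createPlot()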
R Friends,
I have data from which I would like to learn a more general
(smoothened) trend by applying data smoothing methods. Data points
follow a positive stepwise function.
[ASCII sketch of the stepwise data omitted]
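A hedged sketch of two common smoothers on made-up stepwise data (the data below is illustrative); which one is appropriate is a judgment call -- a running median preserves steps better than a locally weighted fit:
x <- 1:100
y <- rep(c(1, 3, 4, 7), each = 25) + rnorm(100, sd = 0.3)
plot(x, y)
lines(lowess(x, y, f = 0.2), col = "blue")   # locally weighted smoother
lines(x, runmed(y, k = 11), col = "red")     # running median, keeps steps sharper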
I installed the lattice package and got an error that R was not able to remove the previous version of lattice. Now my installation seems to be corrupt, even affecting other packages. I am getting this error when loading TTR:
> library(TTR)
Loading required package: xts
Loading required package:
When running DEMA(data, 5) on a vector 'data' of length 5, my R session stops. Is this function or the R environment hitting a bug here, or am I doing something wrong? DEMA should work if the smoothing window size is the same as the data length, right?
(I am working with Eclipse 3.5 and the