It’s my understanding that docx and xlsx files are zipped containers that have
their data in XML files. You should try unzipping one and examining it with a
viewer. You may then be able to use the XML package.
—
David.
> On Jul 1, 2016, at 3:13 PM, Bert Gunter wrote:
>
No, sorry -- all I would do is search.
See below
On Fri, 1 Jul 2016, Mark Shanks wrote:
Hi,
Imagine these two problems:
1) You have an event that occurs repeatedly over time. You want to
identify periods when the event occurs more frequently than the base
rate of occurrence. Ideally, you don't want to have to specify the
perio
Your question is filled with uh, infelicities. See inline below.
On 02/07/16 07:53, Marietta Suarez wrote:
As of now, I simulate 6 variables and manipulate 2 of them. The way my
syntax is written my final database
Do you mean "data set"?
is in list mode. I need all of it to be
in just 1
I don't have a clue, but I suspect that those who might would be
helped by your providing the output of the sessionInfo() command +
perhaps other relevant info on your computing environment.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
No, sorry -- all I would do is search.
-- Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Fri, Jul 1, 2016 at 2:33 PM, John wrote:
> Yes, I have done so
Hint: It's much more efficient not to loop: generate all the random data
in a single call, once -- then split it into your samples. (This can often
even be done with different distribution parameters, as in many cases
these can also be vectorized.)
Example:
## 1000 random samples of size 100
> set.seed(112
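A hedged sketch of what that single call might look like (the seed is cut off above, so the value here and the use of rnorm are illustrative assumptions):

```r
set.seed(112)                                 # illustrative seed (original is truncated)
## 1000 random samples of size 100, generated in one call
x <- matrix(rnorm(1000 * 100), nrow = 1000)
sample_means <- rowMeans(x)                   # then compute a per-sample statistic
```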
Yes, I have done some searching (e.g., tm, markdown, etc.), but I can't find
this function.
If you know any package that works for this purpose, that would be quite
helpful.
Thanks,
John
2016-06-28 16:50 GMT-07:00 Bert Gunter:
> Did you try searching before posting here? -- e.g. a web search or on
Just upgraded to R 3.3.1; when I updated the packages on CRAN, I got a BUNCH of
warning messages like the ones below:
2016-07-01 14:44:19.840 R[369:3724] IMKClient Stall detected, *please Report*
your user scenario attaching a spindump (or sysdiagnose) that captures the
problem - (imkxpc_window
As of now, I simulate 6 variables and manipulate 2 of them. The way my
syntax is written my final database is in list mode. I need all of it to be
in just 1 database. any help would be MUCH appreciated. Here's my syntax:
fun <- function(n, k, rep){
  # prepare to store data
  data <- matrix(0, nrow = 10*k, n
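One way Marietta's list could be collapsed into a single data frame (the simulated pieces below are illustrative stand-ins, not her actual simulation):

```r
# Illustrative stand-in for the list of simulated data sets
pieces <- lapply(1:3, function(i) data.frame(study = i, y = rnorm(5)))
# Collapse the list into one data frame
combined <- do.call(rbind, pieces)
```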
> On Jul 1, 2016, at 2:11 AM, Giles Bischoff wrote:
>
> So, I uploaded a data set via my directory using the command data <-
> data.frame(read.csv("hw1_data.csv")) and then tried to subset that data
> using logical operators. Specifically, I was trying to make it so that I
> got all the rows in
You might look at the package wakefield for data generation.
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Marietta Suarez
Sent: Friday, July 01, 2016 1:28 PM
To: r-help@r-project.org
Subject: [R] trouble double looping to generate data for a meta-
I'm trying to generate data for a meta analysis: 1- generate data following
a normal distribution, 2- generate data following a skewed distribution, 3-
generate data following a logistic distribution. I need to loop this
because the # of studies in each meta will be either 10 or 15. k or total
numb
You may need to re-read the Intro to R.
data[data$Ozone > 31,]
or
subset(data, Ozone > 31)
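One subtlety worth noting here: the bracket form data[data$Ozone > 31, ] keeps rows where Ozone is NA, while subset() drops them silently. A self-contained check using the built-in airquality data set (standing in for the hw1_data.csv file, which is not available here):

```r
# Using the built-in airquality data set for illustration.
# Note: data[data$Ozone > 31, ] keeps NA rows; subset() silently drops them.
high1 <- airquality[!is.na(airquality$Ozone) & airquality$Ozone > 31, ]
high2 <- subset(airquality, Ozone > 31)
```

Filtering the NAs explicitly, as in the first line, makes the two idioms agree.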
Jim Holtman
Data Munger Guru
What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.
On Fri, Jul 1, 2016 at 5:11 AM, Giles Bischoff
wrote:
> So, I up
Apologies for the long post. This is an issue I have been struggling with
and I have tried to be as complete, to the point, and reproducible as
possible.
In documenting a package with roxygen2, I have come across an error that
does not occur in R 3.2.4 revised, but does occur in R 3.3.0 and 3.3.1.
Mark,
I did something similar a couple of years ago by coding non-events as 0,
positive events as +1 and negative events as -1 then summing the value
through time. In my case the patterns showed up quite clearly and I used
other criteria to define the actual periods.
Clint
Clint Bowman
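Clint's coding scheme might be sketched like this (the event labels are hypothetical):

```r
# Code events as +1 / -1 / 0 and track the running sum through time
events <- c("none", "pos", "neg", "pos", "pos", "none", "neg")
coded  <- ifelse(events == "pos", 1L, ifelse(events == "neg", -1L, 0L))
running <- cumsum(coded)   # sustained climbs mark high-frequency periods
```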
Hi,
Sincere apologies if my mail is inappropriate for this mailbox. Please
ignore the mail.
*Issue*
I am getting error "*first argument is not an open RODBC channel*" when I
publish my application on IIS. It runs perfectly under Visual Studio
development mode and the script runs fine on R Consol
So, I uploaded a data set via my directory using the command data <-
data.frame(read.csv("hw1_data.csv")) and then tried to subset that data
using logical operators. Specifically, I was trying to make it so that I
got all the rows in which the values for "Ozone" (a column in the data set)
were grea
Thank you for all your answers; I will take a look at the 'propagate'
package.
PS: this is the first time I am participating in a mailing list. I hope I
am replying to the right emails.
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of DIGHE,
NILESH [AG/2362]
Sent: jeu
Inline.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Fri, Jul 1, 2016 at 7:40 AM, Witold E Wolski wrote:
> Hi William,
>
> I tested plyrs dlp
Hi,
I think this is substantially less ugly:
A <- matrix(1:15,nrow=5,byrow=F); A
a <- c(1,2,3)
B <- sweep(A, 2, a, "^")
apply(B, 1, prod)
You could combine it into one line if you wanted, but I find it clearer as two:
> apply(sweep(A, 2, a, "^"), 1, prod)
[1]   47916  169344  421824  889056 1687500
Hi William,
I tested plyr's dlply function, and it seems to have an O(N * log(R))
complexity (tested for R = N), so I do not know if N is the number of rows
or the number of categories.
For the data.frame example with 2e5 rows and 2e5 categories it is
approx. 10 times faster than split. Still, it is 10
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear Achim Zeileis,
Many thanks for your quick and informative answer.
I'm sure that the vcovCL should work, however, I experience some problems.
> coeftest(model, vcov=vcovCL(model, cluster=mydata$ID))
First I got this error:
Error in vcovCL(mod
A is a 5 x 3 matrix and a is a 3-vector. I'd like to raise A[,1] to the
power a[1], A[,2] to a[2], and A[,3] to a[3], and obtain the product of the
resulting columns, as in line 3.
I can also accomplish this with lines 4 and 5. I'd like to have rowProducts(B),
but there is no such function, so I came up with something u
Dear Achim Zeileis,
Many thanks for your quick and informative answer.
I’m sure that the vcovCL should work, however, I experience some problems.
> coeftest(model, vcov=vcovCL(model, cluster=mydata$ID))
First I got this error:
Error in vcovCL(model, cluster = mydata$ID) :
length of 'cl
Hi Lily,
I think the code below can work:
f <- list.files("D:/output/test/your foldername", full.names = TRUE, recursive = TRUE)
files <- grep("\\.csv$", f)  # match the .csv extension ("*.csv" is a glob, not a regex)
files_merge <- data.frame()
for (i in seq_along(f[files])) {
  data <- read.csv(file = f[files][i], header = TRUE, sep = ",")
  files_merge <- rbind(files_merge, data)
}
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear all,
I use "polr" command (library: MASS) to estimate an ordered logistic regression.
My model: summary( model<- polr(y ~ x1+x2+x3+x4+x1*x2 ,data=mydata, Hess =
TRUE))
But how do I get robust clustered standard errors?
I've tried coeftest
Dear all,
I use ”polr” command (library: MASS) to estimate an ordered logistic regression.
My model: summary( model<- polr(y ~ x1+x2+x3+x4+x1*x2 ,data=mydata, Hess =
TRUE))
But how do I get robust clustered standard errors?
I've tried coeftest(resA, vcov=vcovHC(resA, cluster=lipton$ID)
Hello,
Maybe something like this.
fls <- list.files(pattern = "\\.csv$")  # regex for the .csv extension ("*.csv" is a glob, not a regex)
dat.list <- lapply(fls, read.csv)
dat <- do.call(rbind, dat.list)
Hope this helps,
Rui Barradas
Citando lily li :
> Hi R users,
>
> I'd like to ask how to merge several datasets into one in R? I put
> these csv file
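Rui's approach can be checked end-to-end with temporary files (the column and file names here are illustrative):

```r
# Write two small csv files to a temp directory, then merge them
dir <- tempfile("csvdemo"); dir.create(dir)
write.csv(data.frame(x = 1:2), file.path(dir, "a.csv"), row.names = FALSE)
write.csv(data.frame(x = 3:4), file.path(dir, "b.csv"), row.names = FALSE)
fls <- list.files(dir, pattern = "\\.csv$", full.names = TRUE)
dat <- do.call(rbind, lapply(fls, read.csv))
```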