Warning: I did *not* attempt to follow your query (original or addendum) in
detail. But as you have not yet received a reply, it may be because your
post seems mostly about statistical issues, which are generally off topic
here. This list is primarily about R programming issues. If statistical
issue
As an addendum/erratum to my original post, the second block of code
should, for completeness, read:
set.seed(02102020)
N = 500    # clusters ("leads")
M = 10     # raters
rater = rep(1:M, each = N)         # rater id per observation
lead_n = as.factor(rep(1:N, M))    # cluster id per observation
a = rep(rnorm(N), M)               # cluster-level latent trait, repeated over raters
z = rep(round(25 + 2*rnorm(N) + .2*a))  # cluster-level covariate
x = a + rnorm(N*M)                 # observation-level predictor
y = .5*x + 5*a - .5*z + 2*rnorm(N*M)    # outcome
x_cl=rep(aggregate
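The last line above is cut off in the archive. Purely as an assumption on my part: if `x_cl` was meant to hold the cluster mean of `x` repeated across raters, `ave()` produces the same thing already aligned with `x`, with no `rep()` needed. A self-contained toy version:

```r
# Hypothetical completion: cluster mean of x per lead, aligned with x.
set.seed(1)
N <- 5; M <- 3
lead_n <- as.factor(rep(1:N, M))
x <- rnorm(N * M)
x_cl <- ave(x, lead_n)  # group mean of x, returned in the original order
# equivalent to: rep(aggregate(x, by = list(lead_n), FUN = mean)$x, M)
```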
Thanks; that's a good point. Here is what I have been working with:
library(quanteda)
library(readtext)
texts <- readtext(paste0("/Users/Gordon/Desktop/WPSCASES/", "/word/*.docx"))
And the error message:
Error in list_files(file, ignore_missing, TRUE, verbosity) :
File '' does not exist.
Hello all,
I'll level with you: I'm puzzled!
How is it that this constrained regression routine using -pcls- runs
satisfactorily (courtesy of Tian Zheng):
library(mgcv)
options(digits=3)
x.1=rnorm(100, 0, 1)
x.2=rnorm(100, 0, 1)
x.3=rnorm(100, 0, 1)
x.4=rnorm(100, 0, 1)
y=1+0.5*x.1-0.2*x.2+0.3*x
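The example above is truncated in the archive. For readers landing here, a minimal `pcls` call for a linear model with non-negativity constraints on the coefficients might look like the sketch below. This is my own sketch built from the `mgcv::pcls` documentation, not Tian Zheng's original code:

```r
library(mgcv)
set.seed(1)
x.1 <- rnorm(100); x.2 <- rnorm(100)
y <- 1 + 0.5 * x.1 + 0.2 * x.2 + rnorm(100)
X <- cbind(1, x.1, x.2)
M <- list(
  y = y, w = rep(1, 100), X = X,
  C = matrix(0, 0, 0),                               # no equality constraints
  S = list(), off = array(0, 0), sp = array(0, 0),   # no penalties
  p = rep(0.1, 3),                  # feasible start: Ain %*% p > bin strictly
  Ain = diag(3), bin = rep(0, 3)    # inequality constraints: coefficients >= 0
)
b <- pcls(M)  # returns the constrained coefficient vector
```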
On 02/11/2020 4:46 p.m., Gordon Ballingrud wrote:
Thanks; that's a good point. Here is what I have been working with:
library(quanteda)
library(readtext)
texts <- readtext(paste0("/Users/Gordon/Desktop/WPSCASES/", "/word/*.docx"))
On Windows, you can't have an empty entry in a pathname, so yo
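A sketch of the fix, assuming the doubled slash from pasting "WPSCASES/" onto "/word" is the culprit: let `file.path()` insert the separators instead of concatenating them by hand.

```r
# paste0() keeps both slashes, producing an empty path component:
bad  <- paste0("/Users/Gordon/Desktop/WPSCASES/", "/word/*.docx")
# file.path() puts exactly one separator between components:
good <- file.path("/Users/Gordon/Desktop/WPSCASES", "word", "*.docx")
```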
The error probably means what it says. I'm guessing the "25 GB available"
is on the hard drive. But the issue is that the data must be held in RAM,
and a file >4 GB (before decompression) is a lot of RAM at
laptop scale.
Try taking a subset of the data in Stata before importing?
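If the file has to come into R directly, `haven::read_dta()` can at least cap the rows read; `n_max` is the relevant argument. The sketch round-trips a toy file, since I don't have the real `data_dta`:

```r
library(haven)
df  <- data.frame(id = 1:1000, x = rnorm(1000))
tmp <- tempfile(fileext = ".dta")
write_dta(df, tmp)                 # stand-in for the real Stata file
sub <- read_dta(tmp, n_max = 100)  # read only the first 100 rows
```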
Pat
On Mon, Nov 2
Re-looping R-help. My error.
Hannah, I can't tell you how much RAM your computer has, certainly not how
much is free for R's use; just that you are probably not going to be able
to load a dataset that large into a 2017 MacBook.
On Mon, Nov 2, 2020, 3:20 PM Hannah Van Impe
wrote:
> Thank you v
You may get a helpful response, but if not, I'd suggest posting code you
have to read one file. Then lots of people could likely show you how to
modify it to read all 4000 files.
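To illustrate Duncan's point with plain text files (assumption: for `.docx` you would swap `readLines()` for `readtext::readtext()` on each path), the one-file step generalizes with `list.files()` plus `lapply()`:

```r
# Build a small demo folder so the sketch is self-contained.
dir <- file.path(tempdir(), "wpscases_demo")
dir.create(dir, showWarnings = FALSE)
for (i in 1:3)
  writeLines(sprintf("Text of case %d.", i),
             file.path(dir, sprintf("case%d.txt", i)))

read_one <- function(f) paste(readLines(f), collapse = "\n")  # the "one file" step
files <- list.files(dir, pattern = "\\.txt$", full.names = TRUE)
texts <- setNames(lapply(files, read_one), basename(files))   # all files
```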
Duncan Murdoch
On 02/11/2020 12:28 p.m., Gordon Ballingrud wrote:
Hello all,
I need some help with loading tex
Hello all,
I need some help with loading text-file data into R for analysis with
packages like koRpus.
The problem I am facing is getting R to recognize a folder full of Word
files (about 4,000) as data which I can then make koRpus perform analyses
like Coleman-Liau indexing. If at all possib
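koRpus computes this for you, but for orientation, the Coleman-Liau index is just two per-100-word rates: CLI = 0.0588*L - 0.296*S - 15.8, where L is letters per 100 words and S is sentences per 100 words. A bare-bones hand version (my sketch, not koRpus's implementation):

```r
coleman_liau <- function(text) {
  words <- unlist(strsplit(text, "\\s+"))
  words <- words[nzchar(words)]
  letters   <- sum(nchar(gsub("[^A-Za-z]", "", words)))
  sentences <- length(unlist(regmatches(text, gregexpr("[.!?]+", text))))
  L <- 100 * letters   / length(words)  # letters per 100 words
  S <- 100 * sentences / length(words)  # sentences per 100 words
  0.0588 * L - 0.296 * S - 15.8
}
```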
Hello
I have a question about the error: vector memory exhausted (limit reached?). I
am running R version 4.0.3 (2020-10-10) in RStudio.
I have a MacBook Air (13-inch, 2017). I am trying to open a dataset file
‘data_dta’ through the Import Dataset ‘file from Stata’ button.
I already did this with ot
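For reference, on macOS this "vector memory exhausted (limit reached?)" error comes from R's vector allocation cap, documented under `help("Memory")`, which can be raised via `R_MAX_VSIZE` in `~/.Renviron` (assumption: the machine actually has, or can swap, that much memory; it will not make a 2017 MacBook Air hold a >4 GB dataset comfortably):

```
# ~/.Renviron (restart R afterwards)
R_MAX_VSIZE=8Gb
```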