Can you tell us what is wrong with the "chunked" package, which comes up when you Google "r read large file in chunks"?
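For instance, something along these lines should let you process the file a chunk at a time without ever holding all of it in memory. This is an untested sketch going from the package README; the file names, column name, and chunk size are placeholders:

    library(chunked)
    library(dplyr)

    ## Stream the file in 1e6-row chunks, fix the date column per chunk,
    ## and write the result back out without materialising the whole table.
    read_chunkwise("mydatafile.csv", chunk_size = 1e6) %>%
      mutate(date1 = as.Date(date1)) %>%
      write_chunkwise("mydatafile_clean.csv")

Each dplyr verb is applied per chunk, so peak memory use is governed by chunk_size rather than by the total row count.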
On November 8, 2024 4:58:18 PM PST, Val <valkr...@gmail.com> wrote:

> Hi All,
>
> I am reading a data file (> 1B rows) and doing some date formatting like
>
>     dat <- fread(mydatafile)
>     dat$date1 <- as.Date(ymd(dat$date1))
>
> However, I am getting an error message saying
>
>     Error: cons memory exhausted (limit reached?)
>
> The script was working when the number of rows was around 650M.
>
> Is there another way to handle a big data set in R?
>
> Thank you.
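If the read itself succeeds and it is the date conversion that exhausts memory, another angle is to skip lubridate and convert in place with data.table, which avoids the intermediate vectors that as.Date(ymd(...)) allocates on a billion-row column. Untested sketch, assuming date1 arrives as "YYYY-MM-DD" text:

    library(data.table)

    dat <- fread(mydatafile)
    ## := updates the column by reference instead of copying the table;
    ## as.IDate parses ISO dates into a compact integer-backed date class.
    dat[, date1 := as.IDate(date1)]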