Hi Hesham,
I think you are looking for something like this:
truth <- data.frame(G1 = sample(LETTERS[1:4], 20, replace = TRUE),
                    G2 = sample(LETTERS[1:4], 20, replace = TRUE))
truth
truth$G3 <- as.numeric(truth$G1 == truth$G2)  # 1 where G1 matches G2, else 0
truth
Note that, like quite a few emails produced with JavaScript formatting,
there are embedded characters.
On Wed, 2 Sep 2020 16:31:53 -0500
David Jones wrote:
> Thank you Uwe, John, and Bert - this is very helpful context.
>
> If it helps inform the discussion, to address John and Bert's
> questions - I actually had less memory free when I originally ran the
> analyses and saved the workspace, than
Please re-post in plain text. This is a plain-text list, and HTML can get
messed up, as it did here.
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Wed, Sep 2, 2020 a
Hello. I have this code:
# read data: just three columns; the first and second columns are
# categorical, the third column is numeric
out <- read.csv("outbr.csv")
truth <- out[, seq(1, 2)]
# truth has about 2000 rows; some values in column 1 can show up in
# column 2, and some values in column 2 can
A very simple search ("CRAN NOAA .grb2") and a small bit of reading of help
files suggest that you might want wgrib2 and rNOMADS:
https://rdrr.io/cran/rNOMADS/man/GribInfo.html
https://www.cpc.ncep.noaa.gov/products/wesley/wgrib2/
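For example (a sketch only: the file name is made up, and if I read the help
page right, GribInfo shells out to the external wgrib2 utility, so that must
be installed and on your PATH first):

library(rNOMADS)
# Inspect the inventory of a local GRIB2 file (hypothetical file name)
info <- GribInfo("gfs.t00z.pgrb2.0p25.f000", file.type = "grib2")
str(info)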
--
David
On 9/2/20 5:57 PM, Sarah Goslee wrote:
GDAL supports GRIB2 so it should be easy using rgdal and raster packages.
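Something like this (untested sketch: the file name is made up, and it
assumes your GDAL build includes the GRIB driver):

library(raster)
# Each GRIB2 message becomes one layer of the brick
r <- brick("gfs.t00z.pgrb2.0p25.f000")
r
plot(r[[1]])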
Sarah
On Wed, Sep 2, 2020 at 8:32 PM Philip wrote:
>
> Any advice about how to get NOAA .grb2 files into R?
>
> Thanks.
--
Sarah Goslee (she/her)
http://www.numberwright.com
You need more RAM to load this file. While the data lived in memory in your
original session, certain objects (such as numeric columns) were being shared
among different higher-level objects (such as data frames). When serialized
into the file, those optimizations were lost, and now those columns are
separate copies, so the restored workspace needs more memory than the
original session did.
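You can see the sharing with the lobstr package (a sketch; lobstr::obj_size
counts shared memory only once):

library(lobstr)
x <- rnorm(1e6)
df1 <- data.frame(x = x)
df2 <- df1                # df2 shares df1's column vector in RAM
obj_size(df1)             # about 8 MB
obj_size(df1, df2)        # still about 8 MB: the column is shared
# After save(df1, df2, file = "both.RData") and load("both.RData"),
# each data frame carries its own copy, so memory use roughly doubles.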
David,
If the ".RData" file contains more than one object, you could (and maybe
should) use the SOAR package (from Venables). This package helps you split
the objects over multiple RData files. It's useful when you have numerous
medium-large objects in the workspace but don't use them all at the same
time.
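A minimal sketch (big1 and big2 are placeholders for your own objects; by
default SOAR caches under a .R_Cache directory):

library(SOAR)
Store(big1, big2)  # write each object to its own file on disk and
                   # drop it from the workspace, freeing RAM
Objects()          # list the cached objects; each one is reloaded
                   # lazily the first time you touch it again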
Thank you Uwe, John, and Bert - this is very helpful context.
If it helps inform the discussion, to address John's and Bert's
questions - I actually had less memory free when I originally ran the
analyses and saved the workspace than when I read the data back in
later on (I rebooted in an attempt to free up memory).
R experts may give you a detailed explanation, but it is certainly possible
that the memory available to R when it wrote the file was different from the
memory available when it tried to read it, is it not?
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip)
On Wed, 2 Sep 2020 13:36:43 +0200
Uwe Ligges wrote:
> On 02.09.2020 04:44, David Jones wrote:
> > I ran a number of analyses in R and saved the workspace, which
> > resulted in a 2GB .RData file. When I try to read the file back
> > into R
>
> Compressed in RData but uncompressed in main memory
You are right, Jeff, that was a mistake. I was focusing on the square root
and mistakenly talked about taking the square root instead of raising to the
2nd power.
This is the example I was following (
https://www.youtube.com/watch?v=SaQgA6V8UA4). Of course, I tried fitting
the nnet model
The problem seems to be the fit rather than the predictions. It looks like
nnet is happier with data between 0 and 1; witness:

library(nnet)
# linout = TRUE gives linear output units, i.e. regression rather than
# classification
Fit <- nnet(y/max(y) ~ x, data = a, size = 5, maxit = 1000,
            linout = TRUE, decay = 0.001)
plot(y/max(y) ~ x, data = a)
lines(fitted(Fit) ~ x, data = a)
> On 2 Sep 2020, at 16:21, Paul Bernal wrote:
>
Why would you expect raising y_pred to the power 0.5 to "backtransform" a
model sqrt(y) ~ x? Wouldn't you raise it to the power 2?
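In code (a sketch with made-up data, not the original poster's; whether the
fit itself is any good is a separate matter, per Peter's scaling point):

library(nnet)
a <- data.frame(x = 1:50)
a$y <- a$x^2 + abs(rnorm(50, sd = 10))   # keep y positive for sqrt()
fit <- nnet(sqrt(y) ~ x, data = a, size = 5, linout = TRUE, maxit = 500)
y_pred <- predict(fit, newdata = data.frame(x = 1:50))^2  # square, don't sqrt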
Why would you "backtransform" x in such a model if it were never transformed in
the first place? Dr Maechler did not suggest that.
And why are you mentioning some random unspecified
Dear Dr. Martin and Dr. Peter,
Hope you are doing well. Thank you for your kind feedback. I also tried
fitting the nnet using y ~ x, but the model kept on generating odd
predictions. If I understand correctly, from what Dr. Martin said, it would
be a good idea to try modeling sqrt(y) ~ x and then
On 02.09.2020 04:44, David Jones wrote:
I ran a number of analyses in R and saved the workspace, which
resulted in a 2GB .RData file. When I try to read the file back into R
Compressed in RData but uncompressed in main memory
later, it won't read into R and provides the error: "Error:
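Quick illustration of compressed-on-disk versus uncompressed-in-RAM
(throwaway example):

x <- rep(1:100, 2e5)                  # 20 million integers, very compressible
print(object.size(x), units = "Mb")   # about 76 Mb in memory
save(x, file = "x.RData")             # gzip-compressed on disk
file.size("x.RData") / 2^20           # far smaller than the in-memory size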
Dear all
I would like to ask whether augPred is able to handle missing values. Here is
an example with the data "test" below. I read the augPred documentation, and
nothing there says that a fitted object from data with missing values cannot
be used in augPred. Maybe it would be worth adding something.
Or I just
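Roughly this (a sketch with nlme's Orthodont data standing in for "test",
with NAs added by hand):

library(nlme)
test <- Orthodont
test$distance[c(5, 40)] <- NA          # introduce missing responses
fm <- lme(distance ~ age, data = test, random = ~ 1 | Subject,
          na.action = na.omit)
plot(augPred(fm, level = 0:1))         # does augPred cope with the NAs?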
> peter dalgaard
> on Wed, 2 Sep 2020 08:41:09 +0200 writes:
> Generically, nnet(a$y ~ a$x, a ...) should be nnet(y ~ x,
> data=a, ...) otherwise predict will go looking for a$x, no
> matter what is in xnew.
> But more importantly, nnet() is a _classifier_,
> s
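The data= point in concrete form (a sketch with made-up data; xnew is
hypothetical):

library(nnet)
a <- data.frame(x = runif(100), y = runif(100))
fit <- nnet(y ~ x, data = a, size = 3, linout = TRUE, trace = FALSE)
xnew <- data.frame(x = seq(0, 1, by = 0.1))
predict(fit, newdata = xnew)  # formula terms resolve inside newdata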