Hello Morcus,
Is your question really about language inter-operability?
If so, have you checked out rJava?
"rJava: Low-Level R to Java Interface"
https://CRAN.R-project.org/package=rJava
http://www.rforge.net/rJava/
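A minimal sketch of calling into Java from R via rJava (assumes a working Java installation; the String object here is just an illustration):
library(rJava)
.jinit()                                   # start the JVM
s <- .jnew("java/lang/String", "Hello")    # construct a Java String
.jcall(s, "I", "length")                   # call its length() method; returns 5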
Regards,
Bill.
W. Michels, Ph.D.
On Mon, Oct 30, 2017 at 8:10 AM, Morkus vi
Maingo,
See previous discussion below on rbind.na() and cbind.na() scripts:
https://stat.ethz.ch/pipermail/r-help/2016-December/443790.html
You might consider binding first then adding orthogonally.
So rbind.na() then colSums(), OR cbind.na() then rowSums().
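If you'd rather stay in base R, the same idea can be sketched by NA-padding the shorter vector before binding (rbind.na()/cbind.na() themselves come from the linked thread):
x <- 1:5
y <- 1:3
n <- max(length(x), length(y))
length(x) <- n; length(y) <- n      # pad the shorter vector with NA
m <- rbind(x, y)
colSums(m, na.rm = TRUE)            # column sums, ignoring the NA padding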
Best of luck,
W Michels, Ph.D.
Hi, I was able to get Eivind's code to work by slight modification of
the "grab" function:
library(gridGraphics)   # provides grid.echo()
library(grid)           # provides grid.grab()
grab <- function() {
    grid.echo()   # replay the current base-graphics plot in grid
    grid.grab()   # capture the result as a grob
}
Best Regards,
W. Michels, Ph.D.
On Fri, May 18, 2018 at 9:56 AM, Eivind K. Dovik wrote:
> On Fri, 18 May 2018, Ed Siefker wrote:
>
>> I have dose r
Hello,
For introductory material there is--of course--Immer's Barley Data
(popularized by Bill Cleveland), and used extensively in R to
demonstrate lattice graphics:
>library(lattice)
>?barley
Note the example dotplot() at the bottom of the "barley" help page,
and also on the "barchart" help page.
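A minimal sketch adapted from those examples:
library(lattice)
dotplot(variety ~ yield | site, data = barley, groups = year,
        auto.key = list(space = "right"), layout = c(1, 6))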
Hello, In addition to Duncan Mackay's excellent suggestion, I would
recommend Bert Gunter's "stripless" package, for high-density
Trellis-type conditioning plots. See the vignette for examples, and
try out the code for "earthquake" and "barley" plots from the
reference manual.
https://CRAN.R-proje
I'm sure there are more efficient ways, but this works:
> test1 <- matrix(runif(50), nrow=10, ncol=5)
> ## test1 <- as.data.frame(test1)
> test1 <- rbind(test1, NA)
> test1[11, c(1,3)] <- colSums(test1[1:10,c(1,3)])
> test1
HTH,
Bill.
William Michels, Ph.D.
On Fri, Mar 31, 2017 at 9:20 AM,
Again, you should always copy the R-help list on replies to your OP.
The short answer is you **shouldn't** replace NAs with blanks in your
matrix or dataframe. NA is the proper designation for those cell
positions. Replacing NA with a "blank" in a dataframe will convert
that column to a "character" class.
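A quick illustration of the coercion (a sketch with made-up values):
df <- data.frame(a = c(1.5, NA, 3.2))
class(df$a)                # "numeric"
df$a[is.na(df$a)] <- ""    # replace NA with a "blank"
class(df$a)                # now "character"; sum(df$a) would fail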
er, one can
>> usually use an existing function to push your results out without damaging
>> your working data.
>>
>> It is important to separate your data from your output because mixing
>> results (totals) with data makes using the data further extremely difficu
I believe the lubridate package does a good job with time zones.
> install.packages("lubridate")
> library(lubridate)
Look at the supplied functions with_tz() and force_tz().
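For example (a sketch; the date-time is made up):
library(lubridate)
t <- ymd_hms("2017-04-07 09:00:00", tz = "America/New_York")
with_tz(t, "UTC")      # same instant, displayed in UTC
force_tz(t, "UTC")     # same clock time, re-stamped as UTC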
HTH,
Bill.
William J. Michels, Ph.D.
On Fri, Apr 7, 2017 at 12:52 AM, Jeff Newmiller
wrote:
> R does a poor job
Hi Brad,
Some of the debugging functions may be of use. You can look at trace()
or setBreakpoint(). But I believe Bert is correct in saying your
concept of a "Line Number" and R's concept of a "Line Number" will
differ.
Finally, you can look at the function findLineNum(), which can be
called exte
For a base-R installation, you can print out multiple help pages
(function indices) like so:
> for (i in 1:length(sessionInfo()$basePkgs)) {
+     print(library(help = sessionInfo()$basePkgs[i], character.only = TRUE))
+ }
HTH,
Bill.
William Michels, Ph.D.
On Mon, Apr 10, 2017 at 11:15 AM, Doran, Har
Hi Chris (and Sarah),
Chris you've listed a lot of restrictions, but I just wanted to
mention Jeroen Ooms' work developing OpenCPU:
"The OpenCPU system exposes an http API for embedded scientific
computing with R. The server can run either as a single-user
development server within the interactiv
Have you tried R-GUI, in the R-distribution available below?
https://cran.r-project.org/bin/macosx/
Here's a similar question on SO:
http://stackoverflow.com/questions/13476736/r-lapack-routines-cannot-be-loaded
HTH, Bill.
William Michels, Ph.D.
On Tue, May 2, 2017 at 11:51 AM, Assa Yeroslav
Dear Lily,
Harold is telling you to type "?round" at the R command prompt to pull
up the "round" help page.
>?round
>help("round")
AFAIK, the above two commands are equivalent, in general.
Best, Bill.
W. Michels, Ph.D.
On Tue, May 9, 2017 at 8:11 AM, Doran, Harold wrote:
> ?round
>
>
> Fro
Looking below and online, R's truth tables for NOT, AND, OR are
identical to the NOT, AND, OR truth tables originating from Stephen
Cole Kleene's "strong logic of indeterminacy", as demonstrated on the
Wikipedia page entitled, "Three-Valued Logic"--specifically in the
section entitled "Kleene and Priest logics".
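You can generate those three-valued tables directly in R, e.g.:
u <- c(TRUE, FALSE, NA)
!u                  # NOT
outer(u, u, "&")    # AND truth table
outer(u, u, "|")    # OR truth table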
Evaluation of the NOT, AND, OR logical statements below in MySQL
5.5.30-log Community Server (GPL) replicate R's truth tables for NOT,
AND, OR. See MySQL queries (below), which are in agreement with R
truth table code posted in this thread:
bash-3.2$ mysql
Welcome to the MySQL monitor. Commands
Hi Lily,
You're on the right track, but you should define a new column first
(filled with NA values), then specify the precise rows and columns on
both the left and right hand sides of the assignment operator that
will be altered. Luckily, this is pretty easy...just remember to use
which() on the logical condition.
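A minimal sketch of that pattern (the column names here are hypothetical):
df <- data.frame(id = 1:5, score = c(10, NA, 30, 40, NA))
df$flag <- NA                        # new column, filled with NA
idx <- which(df$score > 20)          # which() drops the NA positions
df$flag[idx] <- "high"
df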
> You have to
> specify the data.frame you want to look into.
>
> And last, learn to use dput() to provide a nice reproducible example.
>
> HTH,
> Ivan
>
>
> --
> Dr. Ivan Calandra
> TraCEr, Laboratory for Traceology and Controlled Experiments
> MONREPOS Archaeolog
We'll need more information on the packages you're using. Can you post the
output of:
> sessionInfo()
Finally, is this a Bioconductor question? They have their own support site:
https://support.bioconductor.org
HTH,
William Michels, Ph.D.
On Sat, Jun 17, 2017 at 11:05 AM, Jeff Newmiller
wro
Hi Mogjib,
Does the following solve your issue?
> setwd(WD)
On Sat, Jun 17, 2017 at 7:26 AM, Mogjib Salek wrote:
> Hi all,
>
> I am learning R by "doing". And this is my first post.
>
> I want to use R: 1- to fetch a DNA sequence from a databank (see bellow)
> and 2- store it as FASTA file.
Hi Michael, is this the direction you'd like to go (simplified)?
?pairs
pairs(iris, log="xy", asp=1, gap=0.1)
--Bill.
On Tue, Jul 19, 2016 at 2:37 PM, Michael Young wrote:
> I want to make this as easy as possible. The extra space could just go
> around the plot in the margin area. I could
Hi Bogdan,
Are you saying you want to drop columns that sum to zero? If so, I'm
not sure you've given us a good example dataframe, since all your
numeric columns give non-zero sums.
Otherwise, what you're asking for is trivial. Below is an example
dataframe ("ygene") with an example "AGA" column
Perhaps one of the following two methods:
> zgene = data.frame(TTT = c(0,1,0,0),
+                    TTA = c(0,1,1,0),
+                    ATA = c(1,0,0,0),
+                    ATT = c(0,0,0,0),
+                    row.names = c("gene1", "gene2", "gene3", "gene4"))
> zgene
      TTT TTA ATA ATT
gene1   0   0   1   0
gene2   1   1   0   0
gene3   0   1   0   0
gene4   0   0   0   0
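For instance, to drop the columns that sum to zero (a sketch of two possible methods):
zgene[, colSums(zgene) != 0]              # keep columns with a nonzero sum
Filter(function(x) sum(x) != 0, zgene)    # same result via Filter()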
Hello Bernard,
You might consider using the "readxl" package, which (from the package
description), "Works on Windows, Mac and Linux without external
dependencies."
https://CRAN.R-project.org/package=readxl
HTH, Bill.
William Michels, Ph.D.
So you're saying rowMeans(cbind(matrix_a, matrix_b)) worked to obtain
your X-axis values?
Wild guess here, are you simply looking for:
colMeans(rbind(matrix_a, matrix_b)) to obtain your Y-axis values?
[Above assuming matrix_a and matrix_b have identical dimensions (nrow, ncol)].
--Bill
William
> str_1 <- list("pc_m2_45_ssp3_wheat", "pc_m2_45_ssp3_wheat", "ssp3_maize",
> "m2_wheat")
> str_2 <- strsplit(unlist(str_1), "_")
> max.length <- max(sapply(str_2,length))
> str_3 <- lapply(lapply(str_2, unlist), "length<-", max.length)
> str_3
See:
http://stackoverflow.com/questions/27995639/i-
Hello Christofer!
For text-editing the R.app GUI has always been fabulous. An old
mainstay on the Mac (after Apple's TextEdit) has been TextWrangler,
and its big-brother, BBEdit. For development RStudio is quite nice,
and--based partly on RStudio's offering of a Vim-compatibility
mode--Vim has bec
Run Atom with the language-r and r-exec packages:
"A language description and snippets for R"
https://atom.io/packages/language-r
"Send R code to various consoles"
https://atom.io/packages/r-exec
On Thu, Jan 21, 2016 at 9:54 AM, boB Rudis wrote:
> Here you go Ista: https://atom.io/packages/rep
On Tue, Apr 12, 2016 at 9:44 AM, David Winsemius wrote:
>
> There need to be more worked examples, but those could easily be mined from
> problems submitted as recorded in the R-help Archives and StackOverFlow.
>
This sounds like a great opportunity for R-users to contribute to the
community
1. It's not immediately clear why you need the line "temp <- subset(df, id
== myid)"
2. The objects described by "temp$age", temp$agesmoke, and temp$yrsquit are
all vectors. So temp.yrssmoke is also a vector. This means that when you
replace, it should be with "<- temp.yrssmoke[i]", where "i" is t
You should review "The Recycling Rule in R" before attempting to
perform functions on 2 or more vectors of unequal lengths:
https://cran.r-project.org/doc/manuals/R-intro.html#The-recycling-rule
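In brief:
1:6 + 1:2    # recycles cleanly: 2 4 4 6 6 8
1:5 + 1:2    # still recycles, but warns that 5 is not a multiple of 2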
Most often, the "Recycling Rule" does exactly what the researcher
intends (automatically). And in many
Hi Dmitri,
> hoyt <- unlist(strsplit("how are you today", split="\\s"))
> y <- list()
> for(j in seq_along(hoyt)) y[[j]] <- sapply(combn(length(hoyt), j,
> simplify=F, function(i) hoyt[i]), paste, collapse = " ")
> y
[[1]]
[1] "how" "are" "you" "today"
[[2]]
[1] "how are" "how you" "
Hi Marc,
I can't seem to get "\n" to work, but simply using c() and "y.intersp
= 1" looks fine:
> plot(1, 1)
> v1 <- c(expression(italic("p")*"-value"), expression("based on "*italic("t")*"-test"))
> legend("topright", legend=v1, y.intersp = 1, bty="n")
Hope this helps,
Bill
William Mich
Hi Bryan (and Petr),
If you want to write tsv-style data from R to clipboard on a Mac (e.g.
for pasting into Numbers), you should do:
> x1 <- matrix(1:6, nrow =2)
> clip <- pipe("pbcopy", "w")
> write.table(x1, file=clip, sep = "\t", row.names = FALSE, fileEncoding =
> "UTF-8" )
> close(clip)
>
Hi Marc, I think it would be wrong to leave readers with the
impression that it's somehow improper to use c() in drawing a legend,
because in fact, it works so well. What doesn't work so well is mixing
expression() calls with escaped characters like "\n" (or "\r"), and
that's probably due to expres
Hi José (and Rolf),
It's not entirely clear what type of 'whitespace' you're referring to,
but if you're using read.table() or read.csv() to create your
dataframe in the first place, setting 'strip.white = TRUE' will remove
leading and trailing whitespace 'from unquoted character fields
(numeric fields are always stripped)'.
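A small sketch of the difference:
read.table(text = "a,  x  \nb,  y  ", sep = ",")                      # "  x  " keeps its spaces
read.table(text = "a,  x  \nb,  y  ", sep = ",", strip.white = TRUE)  # trimmed to "x"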
Hello Wolfgang,
Building on Peter Dalgaard's code, are you just trying to take a sample of
a random column from each row? You don't need to use apply:
> array[cbind(1:nrow(array), sample.int(ncol(array), nrow(array),
replace=TRUE ))]
Just a general note, since you're sampling one-column-per-row
Hi Fred, I believe the preferred package is jsonlite:
https://cran.r-project.org/package=jsonlite
https://jeroen.cran.dev/jsonlite/index.html
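A minimal sketch (the JSON string is just an illustration):
library(jsonlite)
x <- fromJSON('{"name": "Fred", "scores": [1, 2, 3]}')
str(x)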
HTH, Bill.
W. Michels, Ph.D.
On Tue, Sep 15, 2020 at 1:48 PM Fred Kwebiha wrote:
>
> Source=https://jsonformatter.org/e038ec
>
> The above is nested jso
Hi Philip,
You've probably realized by now that R doesn't like column names that
start with a number. If you try to access an R-dataframe column named
2B or 3B with the familiar "$" notation, you'll get an error:
> library(DBI)
> library(RSQLite)
> con2 <- dbConnect(SQLite(), "~/R_Dir/lahmansbase
Hi Philip,
"Perl Download"
https://www.perl.org/get.html
The above link gives you the option to install from source or from
ActiveState. The first link below (source) proudly proclaims, "Perl
compiles on over 100 platforms..." and the second link below (binary)
similarly proclaims, "Perl supports
Dear Jeff,
Rather than diff-ing a linear vector you're trying to diff values from
two different rows. Also you indicate that you want to place the
diff-ed value in the 'lower' row of a new column. Try this (note
insertion of an initial "zero" row):
> df <- data.frame(ID=1:5,Score=4*2:6)
> df1 <-
More correctly, with an initial "NA" value in the "diff" column:
> df <- data.frame(ID=1:5,Score=4*2:6)
> df1 <- rbind(c(0,0), df)
> cbind(df1, "diff"=c(NA, diff(df1$Score)) )
  ID Score diff
1  0     0   NA
2  1     8    8
3  2    12    4
4  3    16    4
5  4    20    4
6  5    24    4
>
HTH, Bill.
Hi Roger,
You could look at the attributes() function in base-R. See:
> ?attributes
From the help-page:
> ## strip an object's attributes:
> attributes(x) <- NULL
HTH, Bill.
W. Michels, Ph.D.
On Sat, Apr 10, 2021 at 4:20 AM Koenker, Roger W wrote:
>
> Wolfgang,
>
> Thanks, this is _extre
Hi Troels,
Have you considered using Lattice graphics?
Adapting from examples on the help page:
> ?histogram()
> histogram( ~ BC | pH, data = ddd, type = "density",
xlab = "BC", layout = c(1, 3), aspect = 0.618,
strip = strip.custom(strip.levels=c(TRUE,TRUE)),
panel = function(x, ...) {
> i <- 1L; span <- 1:100; result <- NA;
> for (i in span){
+ ifelse(i %% 2 != 0, result[i] <- TRUE, result[i] <- FALSE)
+ }
> span[result]
 [1]  1  3  5  7  9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57
[30] 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99
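For comparison, the vectorized equivalent is a one-liner:
span <- 1:100
span[span %% 2 != 0]    # same odd numbers, no loop or ifelse() needed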
try to work within the paradigms that are the
> language's strengths when possible, R's vectorization and indexing in this
> example.
>
> Cheers,
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along and
> sticking things
Maybe something like this?
> df_A <- data.frame(names=LETTERS[1:10], values_A=1:10)
> df_B <- data.frame(names=LETTERS[6:15], values_B=11:20)
> df_AB <- merge(df_A, df_B, by="names")
> df_AAB <- merge(df_A, df_AB, all.x=TRUE)
> df_BAB <- merge(df_B, df_AB, all.x=TRUE)
> df_C <- df_AAB[is.na(df_AAB
Hi Phillips,
Maybe these examples will be useful:
> vec <- c("a","b","c","d","e")
> vec[c(1,1,1,0,0)]
[1] "a" "a" "a"
> vec[c(1,1,1,2,2)]
[1] "a" "a" "a" "b" "b"
> vec[c(5,5,5,5,5)]
[1] "e" "e" "e" "e" "e"
> vec[c(NA,NA,NA,0,0,0,0)]
[1] NA NA NA
> vec[c(NA,NA,NA,1,1,1,1)]
[1] NA NA NA "a" "a"
Hello,
You may have more luck posting your question to the R-SIG-Geo mailing-list:
https://stat.ethz.ch/mailman/listinfo/R-SIG-Geo/
Be sure to use an appropriate "Subject" line, for example, the
particular package/function that seems problematic.
HTH, Bill.
W. Michels, Ph.D.
On Mon, Aug 30,
Hi,
I found package "npsm" at the links below:
https://mran.microsoft.com/snapshot/2017-02-04/web/packages/npsm/index.html
https://cran.r-project.org/src/contrib/Archive/npsm/
HTH, Bill.
W. Michels, Ph.D.
On Wed, Sep 1, 2021 at 8:27 AM wrote:
>
> I need to install the package "npsm" to follo
Hello Anas,
You can find courses and/or training materials on the Ensembl/EBI
websites, including R code:
https://www.ebi.ac.uk/training/online/courses/ensembl-rest-api/
http://training.ensembl.org/
You can also click on individual 'Ensembl REST API Endpoints', and
find sample R code there:
htt
Hello John,
Others have commented on the first half of your question, but the
second half of your question looks very much like R's built-in
predict() functions:
>?predict
>?predict.lm
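A minimal sketch of that workflow, using a built-in dataset:
fit <- lm(mpg ~ wt, data = mtcars)
predict(fit, newdata = data.frame(wt = c(2.5, 3.0)))
predict(fit, newdata = data.frame(wt = 3.0), interval = "confidence")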
Best Regards,
Bill.
W. Michels, Ph.D.
On Wed, May 8, 2019 at 6:23 PM Sorkin, John wrote:
>
> Can someone
Morning Bill, I take it this is dplyr? You might try:
tmp1 <- HCPC %>%
group_by(HCPCSCode) %>%
summarise(Avg_AllowByLimit =
mean(Avg_AllowByLimit[which(Avg_AllowByLimit!=0 & AllowByLimitFlag ==
TRUE)]))
The code above gives "NaN" for cases where AllowByLimitFlag == FALSE.
Maybe this is the answer
Hi Val, see below:
> dat1 <-read.table(text="ID, x, y, z
+ A, 10, 34, 12
+ B, 25, 42, 18
+ C, 14, 20, 8 ",sep=",",header=TRUE,stringsAsFactors=F)
>
> dat2 <-read.table(text="ID, weight
+ A, 0.25
+ B, 0.42
+ C, 0.65 ",sep=",",header=TRUE,stringsAsFactors=F)
>
> dat3 <- data.frame(ID = dat1[,
The best summary I've read on the subject of R's scoping rules (in
particular how they compare to scoping rules in S-PLUS) is Dr. John
Fox's "Frames, Environments, and Scope in R and S-PLUS", written as an
Appendix to the first edition of his book, An R and S-PLUS Companion
to Applied Regression (2
Hi Martin,
'--no-echo'
or
'--no_echo'
Obviously you may prefer the first, but I hope you might consider the second.
Best Regards,
W. Michels, Ph.D.
On Fri, Sep 27, 2019 at 9:04 AM Martin Maechler
wrote:
>
> > Martin Maechler
> > on Mon, 23 Sep 2019 16:14:36 +0200 writes
Apologies, Duncan and Martin. I didn't check "R --help" first. You're
quite right, lots of embedded hyphens.
Best Regards, Bill.
W. Michels, Ph.D.
On Fri, Sep 27, 2019 at 2:42 PM Duncan Murdoch wrote:
>
> On 27/09/2019 5:36 p.m., William Michels via R-help wrote:
Hello,
I expected the code you posted to work just as you presumed it would,
but without a reproducible example--I can only speculate as to why it
didn't.
In the t1 dataframe, if indeed you only want to remove rows of the
t1$sex_chromosome_aneuploidy_f22019_0_0 column which are undefined,
you cou
Apologies Ana, Of course Rui and Herve (and Richard) are correct here
in stating that NA values get 'carried through' when selecting using
the "==" operator.
To give an illustration of what (I believe) Herve means by "NAs
propagating", here's a small 11 x 8 dataframe ("zakaria") posted to
R-Help l
Apparently, the iNEXT package was first described in an academic paper
published in 2016, although CRAN archives go back to 2015.
http://chao.stat.nthu.edu.tw/wordpress/paper/120_pdf_appendix.pdf
https://cran.r-project.org/src/contrib/Archive/iNEXT/
The vignette below has a section entitled "Gener
Hi Phillip,
Jim and David and Petr all wrote you good code, but you have major
problems in data formatting. Your data uses spaces both as a column
separator and also to denote "blank fields". Because of problems with
your input data structure, it's doubtful whether the good code you've
received wi
Hi Phillip,
I wanted to follow up with you regarding your earlier post. Below is a
different way to work up your data than I posted earlier.
I took the baseball data you posted, stripped out
leading-and-following blank lines, removed all trailing spaces on each
line, and removed the "R1", "R2" an
Hi Val,
Here's an answer using a series of ifelse() statements. Because the d4
column is created initially using NA as a placeholder, you can check
your conditional logic at the end using table(!is.na(dat2$d4)):
> dat2 <-read.table(text="ID d1 d2 d3
+ A 0 25 35
+ B 12 22 0
+ C 0 0 31
+ E 10 2
Hello,
Have you tried alternative methods of pre-processing your data, such
as simply calling scale()? What is the effect on convergence, for both
the caret package and and the neuralnet package? There's an example
using scale() with the neuralnet package at the link below:
https://datascienceplu
Hi William,
It's not clear to me why you need this particular older version of
MCMCpack. From the archive I find MCMCpack_1.2-4 dates back to
2012-06-14, and MCMCpack_1.2-4.1 dates back to 2013-04-07:
MCMCpack_1.2-4.1.tar.gz 2013-04-07 00:05 481K
MCMCpack_1.2-4.tar.gz 2012-06-14 12:36 482K
Have
You can try installing the mcmc package first:
https://cran.r-project.org/web/packages/mcmc/index.html
https://cran.r-project.org/src/contrib/Archive/mcmc/
I've used mcmc_0.9-5 with MCMCpack_1.4-3 under R version 3.3.3.
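A sketch of that install order (the archive URL/filename is illustrative; check the Archive directory for the exact version you need):
install.packages("mcmc")
install.packages(
  "https://cran.r-project.org/src/contrib/Archive/MCMCpack/MCMCpack_1.4-3.tar.gz",
  repos = NULL, type = "source")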
HTH, Bill.
W. Michels, Ph.D.
On Wed, Dec 4, 2019 at 2:52 PM Prophet, Will
I'm a big fan of the sqldf package by Gabor Grothendieck:
"sqldf: Manipulate R Data Frames Using SQL"
https://CRAN.R-project.org/package=sqldf
The sqldf "README.html" converts to a 42 page PDF:
https://cran.r-project.org/web/packages/sqldf/readme/README.html
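A minimal example, using a built-in dataset:
library(sqldf)
sqldf("select Species, count(*) as n from iris group by Species")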
You can also find favorable blog post
Hi Paul,
Since you start from strings, it's not clear to me where ASCII enters
the picture. If you really need ASCII, you can use the charToInt()
function in the "R.oo" package. Also there's the AsciiToInt() function
in the "sfsmisc" package. If you just want to use R's native
as.numeric() conver
Hi Jeff,
You might have better luck posting your question on the R-SIG-Geo
mailing list, or perusing their archive. I've found a thread
pertaining to the rnoaa package from August 2016, along with a
particularly informative reply (reply link below):
https://stat.ethz.ch/mailman/listinfo/R-SIG-Geo
Hi Phillip,
Skipping to the last few lines of your email, did you download a
program to look at Sqlite databases (independent of R) as listed
below? Maybe that program ("DB Browser for SQLite") and/or the
instructions below can help you locate your database directory:
https://datacarpentry.org/se
Hi David,
Often on a Mac you can "right click" (or on a laptop--press down with
two fingers), and a pop-up will give you the option to "Copy File
Path". (You can also find this option in a Finder window under the
"Finder -> Services" menu bar) .This is the path you should use to
import your file i
Hi Phillip,
Generally these problems come down to knowing/setting your working
directory. The first question is whether you have a directory named
"data" inside your "C:/Users/Owner/Documents" directory? You may need
to create this directory first, outside of R and/or RStudio (using
your Windows OS).
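You can also check (and create) the directory from within R, e.g.:
getwd()                                         # where R is currently looking
dir.exists("C:/Users/Owner/Documents/data")     # does the folder exist?
# dir.create("C:/Users/Owner/Documents/data")   # create it if it does not
setwd("C:/Users/Owner/Documents/data")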
Dear Ista (and Phillip),
Ista, that's the exact same advice I gave Phillip over a week ago:
https://stat.ethz.ch/pipermail/r-help/2020-March/465994.html
Phillip, it doesn't make sense to post the same question under
different subject headings. While I'm convinced you're making a
sincere effort t
Hi Ivan,
Like Ivan Krylov, I'm not aware of circumstances for simple dataframes
where ncol(DF) does not equal length(DF).
As I understand it, using ncol() versus length() is important when
you're examining an object returned from a function like sapply(),
since sapply() will simplify one-column data down to a vector.
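For example:
m1 <- sapply(1:3, function(i) i)           # simplified to a plain vector
ncol(m1)      # NULL
length(m1)    # 3
m2 <- sapply(1:3, function(i) c(i, i^2))   # simplified to a 2 x 3 matrix
ncol(m2)      # 3
length(m2)    # 6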
Hi Phillip,
You have two choices here: 1. Manually enter the missing rows into
your individual.df using rbind(), and cbind() the overall.df and
individual.df dataframes together (assuming the rows line up
properly), or 2. Use merge() to perform an SQL-like "Left Join", and
copy values from the "ov
Hi Laurent,
Thank you for explaining your size limitations. Below is an example
using the read.fwf() function to grab the first column of your input
file (in 2000 row chunks). This column is converted to an index, and
the index is used to create an iterator useful for skipping lines when
reading
Apologies, Laurent, for this two-part answer. I misunderstood your
post where you stated you wanted to "filter(ing) some
selected lines according to the line name... ." I thought that meant
you had a separate index (like a series of primes) that you wanted to
use to only read-in selected line num
Dear Laurent,
I'm going through your code quickly, and the first question I have is
whether you loaded the "gmp" library?
> library(gmp)
Attaching package: ‘gmp’
The following objects are masked from ‘package:base’:
%*%, apply, crossprod, matrix, tcrossprod
> library(iterators)
> iter(1
Hi Laurent,
Seeking to give you an "R-only" solution, I thought the read.fwf()
function might be useful (to read-in your first column of data, only).
However Jeff is correct that this is a poor strategy, since read.fwf()
reads the entire file into R (documented in "Fixed-width-format
files", Secti
Strike that one sentence in brackets: "[In point of fact, the R Data
Import/Export Manual suggests using perl]", to pre-process data before
loading into R. The manual's recommendation only pertains to large
fixed width formatted files [see #1], whereas Laurent's data is
whitespace-delimited:
> rea
Hi Laurent,
Off the bat I would have guessed that the problem you're seeing has to
do with 'command line quoting' differences between the Windows system
and the Linux/Mac systems. I've noticed people using Windows having
better command line success with "exterior double-quotes / interior
single-qu
#Below returns long list of TRUE/FALSE values,
#Note: "IDs" is a column name,
#Wrap with head() to shorten:
df$IDs %in% c("ident_1", "ident_2");
#Below returns index of IDs that are TRUE,
#Wrap with head() to shorten:
which(df$IDs %in% c("ident_1", "ident_2"));
#Below returns short TRUE/FALSE table, e.g.:
table(df$IDs %in% c("ident_1", "ident_2"));
Hi, you can try starting at the link below:
https://stat.ethz.ch/R-manual/R-patched/doc/html/packages.html
Or type any of following commands into your R-Console (for starters):
> library()
> library(help="base")
> library(help="stats")
> library(help="graphics")
> library(help="grDevices")
> lib
Hello Jean-Louis,
Noting the subject line of your post I thought the first answer would
have been encoding histology stages as factors, and "unclass-ing" them
to obtain integers that then can be mathematically manipulated. You
can get a lot of work done with all the commands listed on the
"factor"
Agreed, I meant to add this line (for unclassed factor levels 1-through-8):
> ((1:8 - 1)*(0.25))+1
[1] 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75
Depending on the circumstance, you can also consider using dummy
factors or even "NA" as a level; see the "factor" help page for
details.
Best, Bill.
W.
Do either of the postings/threads below help?
https://r.789695.n4.nabble.com/read-csv-sql-to-select-from-a-large-csv-file-td4650565.html#a4651534
https://r.789695.n4.nabble.com/using-sqldf-s-read-csv-sql-to-read-a-file-with-quot-NA-quot-for-missing-td4642327.html
Otherwise you can try reading thr
Hi Spencer,
I tried the code below on an older R-installation, and it works fine.
Not a full solution, but it's a start:
> library(RCurl)
Loading required package: bitops
> url <-
> "https://s1.sos.mo.gov/CandidatesOnWeb/DisplayCandidatesPlacement.aspx?ElectionCode=750004975";
> M_sos <- getURL(
Dear Spencer Graves (and Rasmus Liland),
I've had some luck just using gsub() to alter the offending ""
characters, appending a "___" tag at each instance of "" (first I
checked the text to make sure it didn't contain any pre-existing
instances of "___"). See the output snippet below:
> library(R
Hello John, Does this help?
https://cran.r-project.org/web/packages/bibliometrix/vignettes/bibliometrix-vignette.html
https://bibliometrix.org/
Best, Bill.
W. Michels, Ph.D.
On Fri, Aug 28, 2020 at 11:04 PM Fraedrich, John wrote:
>
>
>
> To analyze 10,000+ articles within several journals to