Leaving data in the wrong encoding is leaving a bug around waiting to
surface. Is the data correctly encoded as Latin-1 (ISO 8859-1), as
Windows-1252 (code page 1252, also sometimes loosely called Latin1), or in
some Unicode encoding (most likely UTF-8)?
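One quick way to find out is to test the raw bytes directly; a minimal sketch (the example value is hypothetical, not from the thread):

```r
x <- "Mayag\xfcez"       # "Mayagüez" stored as Latin-1 bytes
validUTF8(x)             # FALSE: the lone 0xFC byte is not valid UTF-8
Encoding(x) <- "latin1"  # declare what the bytes actually are
x <- enc2utf8(x)         # re-encode to UTF-8
validUTF8(x)             # TRUE
```

If `validUTF8()` is already TRUE on the raw data, the column was probably UTF-8 all along and no conversion is needed.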
Character mapping is not such an issue for the ASCII range, where all of
these encodings agree; the problems arise with the non-ASCII characters.
In my experience this NOTE does not interfere with CRAN submission and you
can ignore it.
Hadley
On Monday, September 19, 2022, Igor L wrote:
> Hello everybody,
>
> I'm testing my package with the devtools::check() function and I got a
> warning about found non-ASCII strings.
>
> These characters are in a dataframe and, as they are names of institutions
> used to filter databases, it makes no sense to translate them.
This happened to me this summer when working on the recent US census data; I
came up with two possible solutions:
1. Re-encode the column to UTF-8. Example:
# declare the current encoding, then convert the column to UTF-8
Encoding(puertoricocounty20$NAME) <- "latin1"
puertoricocounty20$NAME <- iconv(puertoricocounty20$NAME, from = "latin1", to = "UTF-8")
2. Use gsub() to replace the offending non-ASCII characters with ASCII equivalents.
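A minimal sketch of option 2 (the value here is a made-up example, not from my census data):

```r
name <- "Mayag\u00fcez Municipio"  # contains a non-ASCII "ü"
name <- gsub("\u00fc", "u", name)  # swap it for a plain ASCII "u"
name
#> [1] "Mayaguez Municipio"
```

A related alternative on most platforms is `iconv(x, "UTF-8", "ASCII//TRANSLIT")`, which transliterates all non-ASCII characters in one pass.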
Hello everybody,
I'm testing my package with the devtools::check() function and I got a
warning about found non-ASCII strings.
These characters are in a dataframe and, as they are names of institutions
used to filter databases, it makes no sense to translate them.
Is there any way to make the check accept these strings as they are?