Hi,
this is what I got, just with base R:
> a <- download.file("ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/preprocessing_erp/s04.eeg", "s04.eeg")
trying URL 'ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/preprocessing_erp/s04.eeg'
Content type 'unknown' length 142773760
Hi, I need help cleaning this:
"[2440810] / www.tinyurl.com/hgaco4fha3"
My desired output is:
"[2440810] / tinyurl".
My attempts:
stringa <- "[2440810] / www.tinyurl.com/hgaco4fha3"
b <- sub('^www.', '', stringa) # wanted to get rid of the "www." part, up to the first dot
b <- sub('[.].*', '', b)
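A sketch of one way to get the desired output in a single call, capturing the part of the host between "www." and the next dot (the pattern is my assumption about the general URL shape):

```r
stringa <- "[2440810] / www.tinyurl.com/hgaco4fha3"
# capture what sits between "www." and the next dot, drop the rest of the URL
sub("www\\.([^.]+)\\..*$", "\\1", stringa)
# → "[2440810] / tinyurl"
```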
Hello, I need some help with regex.
I have these two sentences. I need to extract both "49MU6300" and "LE32S5970"
and put them in a new column "SKU".
A) SMART TV UHD 49'' CURVO 49MU6300
B) SMART TV HD 32'' LE32S5970
DataFrame for testing:
ecommerce <- data.frame(a = c(1,2), producto = c("SMART TV
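A minimal sketch of one way to do it, assuming the full data frame is the one quoted above; the regex simply grabs the last whitespace-separated token, which in these two rows is the SKU:

```r
# assumed reconstruction of the test data frame from the question
ecommerce <- data.frame(
  a = c(1, 2),
  producto = c("SMART TV UHD 49'' CURVO 49MU6300",
               "SMART TV HD 32'' LE32S5970"),
  stringsAsFactors = FALSE
)
# the SKU is the last token: match everything up to the final space, keep the rest
ecommerce$SKU <- sub(".*\\s(\\S+)$", "\\1", ecommerce$producto)
ecommerce$SKU
# → "49MU6300" "LE32S5970"
```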
After restarting the PC I do get the months in Spanish. Sorry for the hassle.
2017-06-05 15:01 GMT-05:00 Omar André Gonzáles Díaz
:
> Thank you Duncan and Rui for your time and interest in this issue.
>
> Maybe it is a problem with Windows 7 and Spanish, and not Windows 10.
>
>
UMERIC=C
>>
>> [5] LC_TIME=Portuguese_Portugal.1252
>>
>> attached base packages:
>> [1] stats graphics grDevices utils datasets methods base
>>
>> loaded via a namespace (and not attached):
>> [1] compiler_3.4.0
>>
>> Hope this h
Hi,
I want to report some strange behaviour with the "months" function, from
base R.
When using "months" to extract months from a date column, I'm getting the
months in English, when I was expecting months in Spanish.
When using "weekdays" to extract days of the week from a date column, I'm
getting
use "write.csv("you-df", "name-of-file.csv", row.names = FALSE).
And Google please, as others have suggested.
2016-12-28 21:33 GMT-05:00 Jim Lemon :
> Hi Bryan,
> When I have to do something like this, I usually go through HTML
> output and import it into MS Word. I am not suggesting that this i
I have the following strings:
[1] "PPA 06 - Promo Vasito" [2] "PPA 05 - Cuentos"
[3] "PPA 04 - Promo vasito" [4] "PPA 03 - Promoción escolar"
[5] "PPA - Saluda a tu pediatra" [6] "PPL - Dia del Pediatra"
*Desired result*:
[1] "Promo Vasito" "Cuentos""Pro
Hi, you were close.
This is the solution:
sub("[.]", " ", "c.t")
You need to escape the dot, because the dot has a special meaning in
regular expressions. Read more on regex...
2016-11-18 19:13 GMT-05:00 John :
> Hi,
>
>Is there any function that replaces a dot with a space? I expect "c t
Hi Steven,
grep uses regex, so you can use this:
grep("age$", x): it matches "a", then "g", then "e", and the "$"
anchors the match to the end of the string.
> grep("age$",x)
[1] 5
2016-05-04 1:02 GMT-05:00 Jim Lemon :
> Hi Steven,
> If this is just a one-off, you could do this:
>
> grepl("age",
Hi,
I would appreciate your help.
I'm having problems when transforming a column from "factor" to "date".
It does not convert 31/03/2016 correctly; it outputs NA.
04/04/2016 turns out as: 2016-04-04
02/04/2016 turns out as: 2016-02-04
31/03/2016 turns out as: NA
03/04/2016
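The NA pattern above suggests the format string is wrong: 02/04/2016 coming out as 2016-02-04 means the first field was read as the month, so 31/03/2016 ("month 31") fails. A sketch of the fix, assuming the column holds day/month/year stored as a factor:

```r
x <- factor(c("31/03/2016", "04/04/2016", "02/04/2016"))
# go through character first, then tell as.Date the real field order
as.Date(as.character(x), format = "%d/%m/%Y")
# → "2016-03-31" "2016-04-04" "2016-04-02"
```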
Hi, I have a DF with a column with "html", like this:
https://ad.doubleclick.net/ddm/trackimp/N344006.1960500FACEBOOKAD/B9589414.130145906;dc_trk_aid=303019819;dc_trk_cid=69763238;ord=[timestamp];dc_lat=;dc_rdid=;tag_for_child_directed_treatment=?";
BORDER="0" HEIGHT="1" WIDTH="1" ALT="Advertisemen
uente$avg.session.duration,"%H:%M:%S")
> [1] "05:29:38" "00:14:15" "00:48:05" "16:40:00" "01:46:24" "02:03:15"
> [7] "15:37:07" "00:23:30" "20:08:48" "13:50:36"
>
> Jim
>
Hi,
I've a data frame with 3 columns: "mes", "fuente", "avg.sessions.duration".
"avg.sessions.duration" is a column containing seconds.
I need your help with:
1. Putting these values in "h:m:s" format.
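A minimal sketch for point 1, assuming the column really holds whole seconds (the three sample values reproduce the first entries of the quoted output):

```r
secs <- c(19778, 855, 2885)
# treat the seconds as an offset from midnight UTC and print only the clock part
format(as.POSIXct(secs, origin = "1970-01-01", tz = "UTC"), "%H:%M:%S")
# → "05:29:38" "00:14:15" "00:48:05"
```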
===
I've found this German page:
" .*forum.*|
.*buy.*"
But, the ".*", as far as I understand, means: any character, 0 or more
times. So I should cover the blank and break lines. May you explain this
further, this is not making click on my head.
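Whether "." crosses a line break depends on the regex engine: in base R's default (TRE) engine the dot does match a newline, while in PCRE (perl = TRUE) it does not unless the (?s) modifier is set. A small illustration:

```r
s <- "the\nforum"
grepl("the.*forum", s)                     # TRUE: default engine, "." matches "\n"
grepl("the.*forum", s, perl = TRUE)        # FALSE: in PCRE "." stops at a newline
grepl("(?s)the.*forum", s, perl = TRUE)    # TRUE: (?s) lets "." match newlines too
```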
2015-10-26 7:29 GMT-05:00 S Ellison :
>
>
I'm using some regex in a for loop to check for some values in column
"source", and put a result in column "fuente".
I need some advice on this topics:
- Making the regex parts work: I've tested them on regexpal.com and they
work, but not in R.
- Making the code (the for loop) more efficient, more cle
ucto[i], "[^A-Z0-9-]+")) #
> isolate tokens
>
> if(any(grep("[A-Z][0-9]", v))) {
>
> linio.tv$id[i] <- v[grep("[A-Z][0-9]", v)]
>
> }
>
> else {
> linio.tv$id[i] <- NA
> }
> }
>
23.78, 38.89, 11.77, 31.67, 11.77, 10.35, 22.22, 13.52, 23.82,
9.09, 15.4, 13.75, 18.9, 16.68, 7.86, 13.33, 30.41, 19.44, 6.67,
16.68, 20.19, 10.54, 8.34, 0, 10.01, 17.01, 30.01, 15.27, 16.82,
9.13, 1.08, 19.55, 11.77, 20.01, 12.93, 21.43, 25.9, 3.33, 30,
17.4, 15.4, 9.53, 10.35, 14.1, 10.81,
Thank you very much to both of you. This information is very enlightening
to me.
Cheers.
2015-10-10 1:11 GMT-05:00 Boris Steipe :
> David answered most of this. Just two short notes inline.
>
>
>
>
> On Oct 10, 2015, at 12:38 AM, Omar André Gonzáles Díaz <
> oma.g
0""55UF7700""65UF7700"
> [36] "55UF8500""TC-55CX640W" "TC-50CX640W" "70UF7700""UG8700"
> [41] "LF6350" "KDL-50FA95C" "KDL50W805C" "KDL-40R354B" "40J5500"
}[A-Z]{1})(.*)", "\\2",
ripley.tv$id, ignore.case = T)
ripley.tv$id <- sub("(.*)([A-Z]{2}[0-9]{2}[A-Z]{1}[0-9]{3}[A-Z]{1})(.*)",
"\\2",
ripley.tv$id, ignore.case = T)
ripley.tv$id <- sub("(.*)([A-Z]{2}[0-
with all types of IDs in the product column.
I think it can be achieved with some ifelse construction; that's why my
initial lines of code.
Any hint is welcome.
2015-10-09 15:37 GMT-05:00 David Winsemius :
>
> On Oct 9, 2015, at 12:59 PM, Omar André Gonzáles Díaz wrote:
>
I need to extract an ID from the product column of my df.
I was able to extract the IDs for some scenarios, but when applying my
code to the next type of IDs (there are some other combinations), the
results of my first line of code got NAs.
ripley.tv$id <- sub("(.*)( [0-9]{2}[a-z]{1}[0-9]{4})(
Yes, you are right. Thank you.
2015-10-08 20:07 GMT-05:00 David Winsemius :
>
> On Oct 8, 2015, at 4:50 PM, Omar André Gonzáles Díaz wrote:
>
> > David, it does work but not in all cases:
>
> It should work if you change the "+" to "*" in the last cap
Hi, I have a vector of 100 elements like these:
a <- c("SMART TV LCD FHD 70\" LC70LE660", "LED FULL HD 58'' LE58D3140")
I want to put just the (70\") and (58'') in a vector b.
This is my try, but it's not working:
b <- grepl('^[0-9]{2}""$',a)
Any hint is welcome, thanks.
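grepl() only returns TRUE/FALSE; to pull the matched text out you want regexpr() plus regmatches() (or a similar extractor). A sketch on the two sample elements:

```r
a <- c("SMART TV LCD FHD 70\" LC70LE660", "LED FULL HD 58'' LE58D3140")
# two digits followed by either a double quote or two single quotes
b <- regmatches(a, regexpr("[0-9]{2}(\"|'')", a))
b
# → "70\"" "58''"
```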
/. 1,699.00", "S/. 2,299.00S/. 1,699.00", "S/.
8,999.00S/. 7,299.00",
"S9000S/. 10,999.00S/. 8,999.00", "S9000S/. 14,999.00S/. 12,999.00",
"S/. 6,999.00S/. 5,999.00", "S/. 2,799.00S/. 2,299.00", "S/.
2,999.00S/. 2,649.00"
Hi R users, I have a character vector with 2 numbers: old price, new
price. The problem is that some rows (4 and 23, for example) contain a
little description of the product, which I don't need.
I've tried a lot of things, like this one:
TV_Precios3 <- gsub("^S ^[0-9]{2}\\$","",TV_Precios2)
Without r
You could use R Markdown, with HTML as the output format.
http://rmarkdown.rstudio.com/html_document_format.html
2015-09-19 15:46 GMT-05:00 Frank Schwidom :
> Hi,
>
> when you can plot this graph using the rgl package,
> then you can use "rgl::writeWebGL" to create a 3D view
> in the browser.
>
> Regard
Hi all,
I'm learning about how to do clusters of clients.
I've found this nice presentation on the subject, but the data is not
available to use. I've contacted the author; hope he'll answer soon.
https://ds4ci.files.wordpress.com/2013/09/user08_jimp_custseg_revnov08.pdf
Does anyone know of similar
Hi Community,
I'm using custom CSS to modify my html_document, generated using knitr.
According to this page:
http://rmarkdown.rstudio.com/html_document_format.html
I have to turn off 1) theme: null and 2) highlight: null, and use
3) css: my_styles.css (my CSS document).
I've achieved to us
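For reference, the YAML header described above would look roughly like this (a sketch following the linked rmarkdown page; my_styles.css is the asker's own file name):

```yaml
output:
  html_document:
    theme: null
    highlight: null
    css: my_styles.css
```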
Hi, R-Help members,
I'm doing some webscraping. This time I need the image (URL) of the
products of an ecommerce site.
I can get the nodes where the URLs are, but when trying to extract the URL,
I need to take one additional step:
"src" vs "data-original": in the source code, some URLs are in the "src"
You need to save your pollutantmean in your working directory.
Then use the submit function they give on the homework page. Download it,
or use the source() function.
Then in the console, use submit()... this function needs your
identification as a student... then it prints notes to continue wi
Two seconds using google:
http://stackoverflow.com/questions/9002227/how-to-get-the-name-of-a-data-frame-within-a-list
2015-05-15 7:20 GMT-05:00 Jim Lemon :
> Hi Kai,
> One way is to name the components of your list with the names of the
> data frames:
>
> df1<-data.frame(a=1:3)
> df2<-data.fram
Is this file in your working directory? (To know your working directory
use: getwd() )
If not, put it in there.
2015-03-02 11:53 GMT-05:00 Mello Cavallo, Alice :
> I copied the file into the bin folder of R ...
>
>
>
> > perf_data <- read.csv("PerfResultsCSv.csv")
> Error in file(file, "rt") :
The best way is to save the file as CSV... then you can simply import it
with this command in R:
read.csv(...) ... to learn more about the read.csv command, use this in R:
?read.csv
There are other packages to import Excel files, but the simplest way is
importing it as CSV.
2014-09-09 18:03 GM
Hi all,
please, I'm trying to understand how using "gsub" for some search and
replace of text makes my data frame lose its format.
This is my code:
DataGoogle1 <- read.csv(file = "DataGoogle2.csv", header = T,
stringsAsFactors = F)
head(DataGoogle1)
Result 1:
CampañaV
Hi all, I have this data.frame with 3 columns: "campaña", "visitas",
"compras".
This is the actual data.frame:
        Campaña Visitas Compras
1 facebook-Ads1     524       2
2 faceBOOK-Ads1     487      24
3  fcebook-ads1    2258       4
4
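If the goal is to treat the misspelled variants as one campaign, one approach is to normalise the names first and then aggregate. The data and the pattern below are my guess at the variants shown; adjust them to the real data:

```r
ecom <- data.frame(
  Campaña = c("facebook-Ads1", "faceBOOK-Ads1", "fcebook-ads1"),
  Visitas = c(524, 487, 2258),
  Compras = c(2, 24, 4),
  stringsAsFactors = FALSE
)
# lower-case everything, then collapse the "facebook"/"fcebook" spellings
ecom$Campaña <- sub("^f.?cebook", "facebook", tolower(ecom$Campaña))
aggregate(cbind(Visitas, Compras) ~ Campaña, data = ecom, FUN = sum)
```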
Hi all,
I want to know where I can find a package to simulate the functions
"Search and Replace" and "Find Words that contain - replace them with...",
which we can use in Excel.
I've looked in other places and they say: "reshape2" by Hadley Wickham.
However, I've investigated it and it's not exactl