On Fri, Oct 15, 2010 at 2:47 AM, Michael Bedward
wrote:
> Hi Rainer,
>
> Great - many thanks for that. Yes, I'm using OSX
>
> I initially tried to use install.packages to get a pre-built
> binary of earthmovdist from Rforge, but it failed with...
>
> In getDependencies(pkgs, dependencies, ava
I'm still getting familiar with lapply
I have this date sequence
x <- seq(as.Date("01-Jan-2010", format = "%d-%b-%Y"), Sys.Date(), by = 1) # to generate series of dates
I want to apply the function to all values of x, so I use lapply (still a
newbie!)
I wrote this test function
pFun <- function (x) {
Hi,
You might look at Reduce(). It seems faster. I converted the matrix
to a list in an incredibly sloppy way (which you should not emulate)
because I cannot think of the simple way.
> probs <- t(matrix(rep(1:1000), nrow=10)) # matrix with row-wise
> probabilities
> F <- matrix(0, nrow=nro
AIC is only defined up to an additive constant (as is log-likelihood).
It should not surprise you that the values for AIC differ between packages.
The real question is whether the change in AIC when going from one model to
another is the same. If not, one is wrong (at least).
-Original Mess
Hi list,
I have a 1710x244 matrix of numerical values and I would like to calculate the
mean of every group of three consecutive values per column to obtain a new
matrix of 570x244. I could get it done using a for loop but how can I do that
using apply functions?
In addition to this, do I hav
On 15.10.2010 09:19, Santosh Srinivas wrote:
I'm still getting familiar with lapply
I have this date sequence
x<- seq(as.Date("01-Jan-2010",format="%d-%b-%Y"), Sys.Date(), by=1) #to
generate series of dates
I want to apply the function for all values of x . so I use lapply (Still a
newbie!)
I
Hi:
Look into the rollmean() function in package zoo.
HTH,
Dennis
On Fri, Oct 15, 2010 at 12:34 AM, David A. wrote:
>
> Hi list,
>
> I have a 1710x244 matrix of numerical values and I would like to calculate
> the mean of every group of three consecutive values per column to obtain a
> new mat
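Dennis's rollmean() pointer can be adapted to non-overlapping groups of three via rollapply() with by = 3; a small sketch on a toy matrix (sizes shrunk from the 1710x244 original for illustration):

```r
library(zoo)

# toy stand-in for the 1710 x 244 matrix: 6 rows, 4 columns
mat <- matrix(1:24, nrow = 6)

# mean of every non-overlapping group of 3 consecutive rows, per column
res <- coredata(rollapply(zoo(mat), width = 3, by = 3,
                          FUN = mean, align = "left"))
dim(res)  # 2 x 4, since 6 / 3 = 2
```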
one efficient way to do this, avoiding loops, is using the rowsum()
function, e.g.,
mat <- matrix(rnorm(1710*244), 1710, 244)
id <- rep(seq_len(570), each = 3)
means <- rowsum(mat, id, FALSE) / 3
Regarding the second part of your question, indeed a "golden" rule of
efficient R programming when
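On a small matrix the rowsum() trick above can be sanity-checked against an explicit loop (a quick sketch, not part of the original post):

```r
mat <- matrix(1:24, nrow = 6)      # 6 x 4 toy matrix
id  <- rep(seq_len(2), each = 3)   # two groups of 3 consecutive rows
means <- rowsum(mat, id, reorder = FALSE) / 3

# the same computation with a loop, for comparison only
means.loop <- matrix(0, 2, ncol(mat))
for (j in seq_len(2)) {
  rows <- ((j - 1) * 3 + 1):(j * 3)
  means.loop[j, ] <- colMeans(mat[rows, , drop = FALSE])
}
stopifnot(isTRUE(all.equal(unname(means), means.loop)))
```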
Hello Dennis,
That's a very good suggestion. I've attached a template here as a .png file,
I hope you can view it. This is what I've managed to achieve in S-Plus (we
use S-Plus at work but I also use R because there are some very good R
packages for PK data that I want to take advantage of that is n
Dear Sir/Madam;
I'm not sure whether this is the correct contact for help.
I've been recently working with R on my project; unfortunately it suddenly
crashes!
It gives me the following message:
"FATAL ERROR: unable to restore saved data in .RDATA"
I decided to uninstall the copy (a R2.11.0) and ins
Thx.
-Original Message-
From: Uwe Ligges [mailto:lig...@statistik.tu-dortmund.de]
Sent: 15 October 2010 13:11
To: Santosh Srinivas
Cc: r-help@r-project.org
Subject: Re: [R] Downloading file with lapply
On 15.10.2010 09:19, Santosh Srinivas wrote:
> I'm still getting familiar with lappl
orduek wrote:
>
> Is there an option to import .sta files to R?
> I know you can import SPSS ones.
> thank you.
>
Hi:
I have the same problem.
Did you manage in the end to import .sta files to R?
thank you
--
View this message in context:
http://r.789695.n4.nabble.com/import-statsoft-statist
Hi
I want to set a variable to either 1 or 0 depending on a value of a
dataframe and then add this as a column to the dataframe.
This could be done with a loop, but as we are able to do operations on a
complete row or column without a loop, it would be sweet if it could be done.
for example:
table
On Fri, Oct 15, 2010 at 12:23 AM, Joshua Wiley wrote:
>
> Hi,
>
> You might look at Reduce(). It seems faster. I converted the matrix
> to a list in an incredibly sloppy way (which you should not emulate)
> because I cannot think of the simple way.
Dennis provided the answer: system.time(add(
Dear colleagues,
I would like to ask you how to estimate
biweight M-estimator of Tukey with known
scale example.
I know how to estimate biweight M-estimator
if estimated scale is used, using the function
rlm in package MASS.
library(MASS)
x<-rnorm(1000)
rlm(x~1,psi="psi.bisquare")
But I would like t
Dear Joel,
On Fri, Oct 15, 2010 at 1:16 AM, Joel wrote:
>
> Hi
>
> I want to set a variable to either 1 or 0 depending on a value of a
> dataframe and then add this as a column to the dataframe.
>
> This could be done with a loop but as we are able to do questions on a
> complete row or colum wit
try this:
table$VoteRight <- as.numeric(table$age > 18)
Best,
Dimitris
On 10/15/2010 10:16 AM, Joel wrote:
Hi
I want to set a variable to either 1 or 0 depending on a value of a
dataframe and then add this as a column to the dataframe.
This could be done with a loop but as we are able to
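Dimitris's one-liner in a self-contained form (the column names here are invented for illustration; the object is called tab rather than table so base::table is not masked):

```r
# toy data frame standing in for the poster's table
tab <- data.frame(name = c("ann", "bob", "cecil"), age = c(15, 22, 40))
tab$VoteRight <- as.numeric(tab$age > 18)  # TRUE/FALSE coerced to 1/0
tab$VoteRight  # 0 1 1
```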
Indeed I was close :)
Thx for the fast respond!
Have a good day
//Joel
--
View this message in context:
http://r.789695.n4.nabble.com/Set-value-if-else-tp2996667p2996682.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-proj
On Oct 15, 2010, at 3:46 AM, Anh Nguyen wrote:
Hello Dennis,
That's a very good suggestion. I've attached a template here as
a .png file,
I hope you can view it. This is what I've managed to achieve in S-
Plus (we
use S-Plus at work but I also use R because there's some very good R
package
x <- data.frame(x=1:10)
require(gtools)
x$y <- ifelse(odd(x$x),0,1)
HTH
R.
Dr. Rubén Roa-Ureta
AZTI - Tecnalia / Marine Research Unit
Txatxarramendi Ugartea z/g
48395 Sukarrieta (Bizkaia)
SPAIN
> -Mensaj
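The same 0/1 column can be built without the gtools dependency, since odd/even is just a modulo test (a base-R sketch):

```r
x <- data.frame(x = 1:10)
# 0 for odd, 1 for even -- matching ifelse(odd(x$x), 0, 1) above
x$y <- as.numeric(x$x %% 2 == 0)
x$y  # 0 1 0 1 0 1 0 1 0 1
```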
Hi,
For this example:
O <- c(0, 0, 0, 2, 0, 0, 2, 0)
I want to create an array every time O[i] > 0. The array should be in the
form;
R[j] <- array(-1, dim=c(2,O[i]))
i.e. if O[i] > 0 4 times I want 4 R arrays.
Does anyone have any suggestions?
Thanks,
Doug
--
View this message in context:
htt
Hi Dennis and Dimitris,
thanks for your answers. I am trying the rollmean() function and also the
rollapply() function because I also want to calculate CV for the values.
For this I created a co.var() function. I am having problems using them.
>co.var<-function(x)(
+sd(x)/mean(x)
+)
> dim(myd
On Fri, Oct 15, 2010 at 9:55 AM, dpender wrote:
>
> Hi,
>
> For this example:
>
> O <- c(0, 0, 0, 2, 0, 0, 2, 0)
>
> I want to create an array every time O[i] > 0. The array should be in the
> form;
>
> R[j] <- array(-1, dim=c(2,O[i]))
>
> i.e. if O[i] > 0 4 times I want 4 R arrays.
>
> Does anyone have
Hi, Doug,
maybe
columns <- c( 0, 3, 0, 2, 0, 1)
lapply( columns[ columns > 0],
function( o) array( -1, dim = c( 2, o)))
does what you want?
Regards -- Gerrit
-
AOR Dr. Gerrit Eichner Mathematical Institu
Dear List,
I am doing some simulation in R and need basic help!
I have a list of animal families for which I know the number of species in each
family.
I am working under the assumption that a species has a 7.48% chance of being at
risk.
I want to simulate the number of species expected to
Have a look at the package smoothmest.
Christian
On Fri, 15 Oct 2010, Ondrej Vozar wrote:
Dear colleagues,
I would like to ask you how to estimate
biweight M-estimator of Tukey with known
scale example.
I know how to estimate biweight M-estimator
if estimated scale is used, using the function
rlm
hello,
I was recently asking the list for help with some interaction contrasts (see
below), for which
I had to change the reference level of the model "on the fly" (I read a post
that this is possible in
multcomp).
If someone has a clue how this is coded in multcomp's glht() - please point
me ther
Hi:
To get the plots precisely as you have given them in your png file, you're
most likely going to have to use base graphics, especially if you want a
separate legend in each panel. Packages ggplot2 and lattice have more
structured ways of constructing such graphs, so you give up a bit of freedom
Barry, Gerrit,
That was what I was after, but unfortunately only the starting point. I am
now trying to amend a function that inserts the R matrices into a dataset in
the correct places:
i.e.
H <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
T <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32,
I've rolled up R-2.12.0.tar.gz a short while ago. This is a development
release which contains a number of new features.
Also, a number of mostly minor bugs have been fixed. See the full list
of changes below.
You can get it from
http://cran.r-project.org/src/base/R-2/R-2.12.0.tar.gz
or wait fo
Hi
r-help-boun...@r-project.org wrote on 14.10.2010 10:34:12:
>
> Thanks Dennis.
>
>
>
> One more thing if you don't mind. How to I abstract the individual H
and T
> “arrays” from f(m,o,l) so as I can combine them with a date/time array
and
> write to a file?
>
Try to look at ?merge fu
Thanks for the advice Gabor,
I was indeed not starting and finishing with sqldf(). Which was why it was
not working for me. Please forgive a blatantly obvious mistake.
I have tried what you suggested and unfortunately R is still having problems
doing the join. The problem seems to be one of memory
Hi:
I don't believe you've provided quite enough information just yet...
On Fri, Oct 15, 2010 at 2:22 AM, John Haart wrote:
> Dear List,
>
> I am doing some simulation in R and need basic help!
>
> I have a list of animal families for which i know the number of species in
> each family.
>
> I a
On Fri, Oct 15, 2010 at 09:57:21AM +0200, Muteba Mwamba, John wrote:
> "FATAL ERROR: unable to restore saved data in .RDATA"
Without more information it's hard to know what exactly went wrong.
Anyway, the message most likely means that the .RData file got
corrupted. Deleting it should solve the
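Removing the file can also be done from within an R session (a sketch; deleting .RData permanently discards the saved workspace, so the demonstration below runs on a scratch copy rather than a real one):

```r
# scratch copy so nothing real is destroyed
f <- file.path(tempdir(), ".RData")
file.create(f)

# delete the (assumed corrupted) workspace file
if (file.exists(f)) unlink(f)
file.exists(f)  # FALSE -- R will now start with a clean workspace
```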
On Oct 15, 2010, at 12:37 , Philipp Pagel wrote:
> On Fri, Oct 15, 2010 at 09:57:21AM +0200, Muteba Mwamba, John wrote:
>
>> "FATAL ERROR: unable to restore saved data in .RDATA"
>
> Without more information it's hard to know what exactly went wrong.
>
> Anyway, the message most likely means t
Sometimes such a message appears when you try to open an .RData file in an
environment where packages used when the file was created are not
installed. Then it is possible just to install the necessary packages. Without
the whole story it is impossible to say what the real cause of that error is.
Regards
Petr
r-
OK, my last question didn't get any replies so I am going to try and ask a
different way.
When I generate contrasts with contr.sum() for a 3 level categorical variable I
get the 2 orthogonal contrasts:
> contr.sum( c(1,2,3) )
  [,1] [,2]
1    1    0
2    0    1
3   -1   -1
This provides the
Hi Denis and list
Thanks for this , and sorry for not providing enough information
First let me put the study into a bit more context : -
I know the number of species at risk in each family; what I am asking is "Is
risk random according to family or do certain families have a disproportionate
Hi John,
The word "species" attracted my attention :)
Like Dennis, I'm not sure I understand your idea properly. In
particular, I don't see what you need the simulation for.
If family F has Fn species, your random expectation is that p * Fn of
them will be at risk (p = 0.0748). The variance on t
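Michael's point can be made concrete; for a family of, say, Fn = 80 species (a size quoted elsewhere in this thread), the binomial mean and variance work out as:

```r
p  <- 0.0748   # per-species risk probability from the original post
Fn <- 80       # example family size

mean.risk <- Fn * p            # expected species at risk: 5.984
var.risk  <- Fn * p * (1 - p)  # binomial variance: about 5.54
c(mean.risk, var.risk)
```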
Hi, Doug,
maybe
HH <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
TT <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32, 7.14, 7.14, 7.14)
columnnumbers <- c(0, 0, 0, 3, 0, 0, 0, 2, 0)
TMP <- lapply( seq( columnnumbers),
function( i, CN, M) {
if( CN[i] == 0) as.
Hi Michael,
Thanks for this - the reason I am following this approach is that it appeared
in a paper I was reading, and I thought it was an interesting angle to take.
The paper is
Vamosi & Wilson, 2008. Nonrandom extinction leads to elevated loss of
angiosperm evolutionary history. Ecology Let
Looking at the source for nlrob, it looks like it saves the coefficients
from the results of running an nls and then passes those coefficients back
into the next nls request. The issue that it's running into is that nls
returns the coefficients as upper, LOGEC501, LOGEC502, and LOGEC503, rather
tha
Hi,
I am trying to compute scores for a new observation based on a previously
computed PCA by the PCAgrid() function in the pcaPP package. My data has
more variables than observations.
Here is an imaginary data set to show the case:
> n.samples<-30
> n.bins<-1000
> x.sim<-rep(0,n.bins)
> V.sim<-diag(n.bi
On Fri, Oct 15, 2010 at 6:14 AM, Chris Howden
wrote:
> Thanks for the advice Gabor,
>
> I was indeed not starting and finishing with sqldf(). Which was why it was
> not working for me. Please forgive a blatantly obvious mistake.
>
>
> I have tried what U suggested and unfortunately R is still havi
Hi John,
I haven't read that particular paper but in answer to your question...
> So if i do this for all the families it will be the same as doing the
> simulation experiment
> outline in the method above?
Yes :)
Michael
On 15 October 2010 23:18, John Haart wrote:
> Hi Michael,
>
> Thanks
Although I know there is another message in this thread I am replying
to this message to be able to include the whole discussion to this
point.
Gregor mentioned the possibility of extending the compiled code for
cumsum so that it would handle the matrix case. The work by Dirk
Eddelbuettel and Rom
Dear all
I have data like this:
x y
[1,] 59.74889 3.1317081
[2,] 38.77629 1.7102589
[3,] NA 2.2312962
[4,] 32.35268 1.3889621
[5,] 74.01394 1.5361227
[6,] 34.82584 1.1665412
[7,] 42.72262 2.7870875
[8,] 70.54999 3.3917257
[9,] 59.37573 2.67632
G'day Michael,
On Fri, 15 Oct 2010 12:09:07 +0100
Michael Hopkins wrote:
> OK, my last question didn't get any replies so I am going to try and
> ask a different way.
>
> When I generate contrasts with contr.sum() for a 3 level categorical
> variable I get the 2 orthogonal contrasts:
>
> > con
you can do the following:
mat <- cbind(x = runif(15, 50, 70), y = rnorm(15, 2))
mat[sample(15, 2), "x"] <- NA
na.x <- is.na(mat[, 1])
mat[na.x, ]
mat[!na.x, ]
I hope it helps.
Best,
Dimitris
On 10/15/2010 2:45 PM, Jumlong Vongprasert wrote:
Dear all
I have data like this:
x
Try this:
> a <- read.table(textConnection(" x y
+ 59.74889 3.1317081
+ 38.77629 1.7102589
+NA 2.2312962
+ 32.35268 1.3889621
+ 74.01394 1.5361227
+ 34.82584 1.1665412
+ 42.72262 2.7870875
+ 70.54999 3.3917257
+ 59.37573 2.6763249
+ 68.87422 1.96977
Hi,
I have performed a clustering of a matrix and plotted the result with
pltree. See code below. I want to color the labels of the leafs
individually. For example I want the label name "Node 2" to be plotted in
red. How do I do this?
Sincerely
Henrik
library(cluster)
D <- matrix(nr=4,
Hi
r-help-boun...@r-project.org wrote on 15.10.2010 15:00:46:
> you can do the following:
>
> mat <- cbind(x = runif(15, 50, 70), y = rnorm(15, 2))
> mat[sample(15, 2), "x"] <- NA
>
> na.x <- is.na(mat[, 1])
> mat[na.x, ]
> mat[!na.x, ]
Or if you have missing data in several columns and you
Hi,
This would probably deserve some abstraction, we had C++ versions of
apply in our TODO list for some time, but here is a shot:
require( Rcpp )
require( inline )
f.Rcpp <- cxxfunction( signature( x = "matrix" ), '
NumericMatrix input( x ) ;
NumericMatrix output = clone( input )
..by some (extensive) trial and error, reordering the contrast matrix and the
reference level,
I figured it out myself -
for anyone who might find this helpful searching for a similar contrast in
the future:
this should be the right one:
c2<-rbind("fac2-effect in A"=c(0,1,0,0,0,0,0,0),
Hi!
I am a new R user and have no clue of this error (see below) while
using edgeR package:
> Y <- clade_reads
> y <- Y[,c(g1,g2)]
> grouping <- c( rep(1,length(g1)), rep(2,length(g2)) )
> size <- apply(y, 2, sum)
> d <- DGEList(data = y, group = grouping, lib.size = size)
Error in DGEList(da
Hello again John,
I was going to suggest that you just use qbinom to generate the
expected number of extinctions. For example, for the family with 80
spp the central 95% expectation is:
qbinom(c(0.025, 0.975), 80, 0.0748)
which gives 2 - 11 spp.
If you wanted to look across a large number of
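Looking across many families at once is a one-liner with rbinom(); the family sizes below are invented for illustration:

```r
set.seed(1)
n.species <- c(80, 12, 45, 230, 7)  # hypothetical family sizes
p <- 0.0748

# 10000 simulated experiments; one column per family
sims <- t(replicate(10000, rbinom(length(n.species), n.species, p)))

# simulated central 95% range of species at risk, per family
apply(sims, 2, quantile, probs = c(0.025, 0.975))
```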
On 10/15/2010 06:17 AM, Ying Ye wrote:
> Hi!
>
> I am a new R user and have no clue of this error (see below) while using
> edgeR package:
edgeR is a Bioconductor package, so please subscribe to the Bioconductor
list and ask there.
http://bioconductor.org/help/mailing-list/
include the output of
Dear List,
In each iteration of a simulation study, I would like to save the p-value
generated by "coxph". I fail to see how to address the p-value. Do I have to
calculate it myself from the Wald Test statistic?
Cheers, Paddy
__
R-help@r-project.org
Henrik, there is an easily adaptable example in this thread:
http://r.789695.n4.nabble.com/coloring-leaves-in-a-hclust-or-dendrogram-plot-tt795496.html#a795497
HTH. Bryan
*
Bryan Hanson
Professor of Chemistry & Biochemistry
DePauw University, Greencastle IN USA
On 10/15/10 9:05 AM,
Hi Gerrit,
Almost it but I need to insert M[,i] as well as (matrix( -1, nrow( M),
CN[i]) when CN[i] = 0
I know this is not correct but can something like the following be done?
HH <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
TT <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32, 7.14, 7.14,
Dear R-help mailing list and software development team.
After using R for a few weeks, I have been exposed to the best of the program.
In addition, the R-help mailing list is a great help to new users.
I do my job as I want and get great support from the R-help mailing list.
Thanks R-help mailing list.
T
Hi!
I am trying to produce a graph which shows overlap in latitude for a
number of species.
I have a dataframe which looks as follows
species1,species2,species3,species4.
minlat 6147947,612352,627241,6112791
maxlat 7542842,723423,745329,7634921
I wan
I am having a hard time properly setting up NetBeans to work with the JRI libs
(http://rosuda.org/JRI/). Most of the instructions I have found so far
are written for Eclipse or Windows (or both).
I have set java.library.path variable in config: customize:VM
arguments field, by specifying
"-Djava.library.p
Hi,
I'm new to this mailing list so apologies if this is too basic. I have
confocal images 512x512 from which I have extracted x,y positions of the
coordinates of labelled cells, exported from ImageJ as a .csv file. I also
have images that define an underlying pattern in the tissue defined as
ar
Dear R-Users,
I have a question concerning extraction of parameter estimates of
variance function from lme fit.
To fit my simulated data, we use varConstPower ( constant plus power
variance function).
fm<-lme(UPDRS~time,data=data.simula,random=~time,method="ML",weights=varConstPower(fixed=li
A) you hijacked another thread.
On Oct 15, 2010, at 9:50 AM, Jonas Josefsson wrote:
Hi!
I am trying to produce a graph which shows overlap in latitude for a
number of species.
I have a dataframe which looks as follows
species1,species2,species3,species4.
minlat
Try this:
coef(fm$modelStruct$varStruct, uncons = FALSE)
On Fri, Oct 15, 2010 at 11:42 AM, Hoai Thu Thai wrote:
> Dear R-Users,
>
> I have a question concerning extraction of parameter estimates of variance
> function from lme fit.
>
> To fit my simulated data, we use varConstPower ( constant pl
Thank you Henrique!! It works.
Thu
On 15/10/2010 16:53, Henrique Dallazuanna wrote:
coef(fm$modelStruct$varStruct, uncons = FALSE)
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
On Oct 15, 2010, at 9:21 AM, Öhagen Patrik wrote:
Dear List,
In each iteration of a simulation study, I would like to save the p-
value generated by "coxph". I fail to see how to address the p-value.
Do I have to calculate it myself from the Wald Test statistic?
No. And the most important
And thank YOU for taking the time to express your gratitude. I'm sure
all those who regularly take the time to contribute to the list
appreciate the appreciation.
Andrew Miles
On Oct 15, 2010, at 9:49 AM, Jumlong Vongprasert wrote:
Dear R-help mailing list and software development team.
A
Should you need to do it again, you may want to look at the relevel
function. I suppose that would meet the definition of some versions of
"on the fly" but once I have a model, rerunning with a different
factor leveling is generally pretty painless.
--
David.
On Oct 15, 2010, at 9:09 AM,
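A minimal relevel() illustration (the factor levels here are invented):

```r
f <- factor(c("A", "B", "C", "B"))
levels(f)   # "A" "B" "C" -- "A" is the baseline for treatment contrasts

f2 <- relevel(f, ref = "B")
levels(f2)  # "B" "A" "C" -- "B" is now the baseline
```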
Hi,
I've had to do something like that before. It seems to be a "feature" of nls
(in R, but not as I recall in Splus) that it accepts a list with vector
components as 'start' values, but flattens the result values to a single
vector.
I can't spend much time explaining, but here's a fragment of
Also look at the get function, it may be a bit more straight forward (and safer
if there is any risk of someone specifying 'rm(ls())' as a data frame name).
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message
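A small get() sketch, retrieving a data frame whose name arrives as a string (object names invented):

```r
dat <- data.frame(a = 1:3)
nm  <- "dat"        # name supplied as a string, e.g. by a user

df <- get(nm)       # safer than eval(parse(text = nm))
identical(df, dat)  # TRUE
```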
Is there a way to estimate a nominal response model?
To be more specific let's say I want to calibrate:
\pi_{v}(\theta_j)=\frac{e^{\xi_{v}+\lambda_{v}\theta_j}}{\sum_{h=1}^m
e^{\xi_{h}+\lambda_{h}\theta_j}}
Where $\theta_j$ is a the dependent variable and I need to estimate
$\xi_{h}$ and $
Hi Dennis,
The first thing I did with my data was to explore it with 6 graphs
(wet-high, med, and solo-; dry-high, med, and solo-) and gave me very
interesting patterns: seed size in wet treatments is either negatively
correlated (high and medium densities) or flat (solo). But dry treatments
are a
On 15 Oct 2010, at 13:55, Berwin A Turlach wrote:
> G'day Michael,
>
Hi Berwin
Thanks for the reply
> On Fri, 15 Oct 2010 12:09:07 +0100
> Michael Hopkins wrote:
>
>> OK, my last question didn't get any replies so I am going to try and
>> ask a different way.
>>
>> When I generate contrast
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Joshua Wiley
> Sent: Friday, October 15, 2010 12:23 AM
> To: Gregor
> Cc: r-help@r-project.org
> Subject: Re: [R] fast rowCumsums wanted for calculating the cdf
>
> Hi,
>
> You
Hi,
I am relatively new to R but not to graphing, which I used to do in Excel
and a few other environments on the job. I'm going back to school for a PhD
and am teaching myself R beforehand. So I hope this question is not
unacceptably ignorant but I have perused every entry level document I can
f
barnhillec wrote:
>
> I'm trying to graph some simple music psychology data. Columns are musical
> intervals, rows are the initials of the subjects. Numbers are in beats per
> minute (this is the value at which they hear the melodic interval split
> into two streams). So here's my table:
>
>
?matplot
e.g.,
copy your data to the clipboard then
library(psych)
my.data <- read.clipboard()
my.data
Tenth Fifth Third
GG 112 152 168
EC 100 120 140
SQ 160 184NA
SK 120 100 180
matplot(t(my.data),type="b")
Bill
At 10:27 AM -0700 10/15/10, barnhillec wrote:
Dear all, I have the following 2 zoo objects. However, when I try to merge those 2
objects into one, nothing comes out as intended. Please see below the objects
as well as the merged object:
> dat11
V2 V3 V4 V5
2010-10-15 13:43:54 73.8 73.8 73.8 73.8
2010-10-15 13:44:15 7
I have a program that creates a Png file using Rgooglemap with an extent
(lonmin,lonmax,latmin,latmax)
I also have a contour plot of the same location, same extent, same sized
(height/width) png file.
I'm looking for a way to make the contour semi transparent and overlay it on
the google map ( hyb
Megh wrote:
>
> Dear all, I have following 2 zoo objects. However when I try to merge
> those 2 objects into one, nothing is coming as intended. Please see below
> the objects as well as the merged object:
>
>
>> merge(dat11, dat22)
> V2.dat11 V3.dat11 V4.dat11 V5.dat11
On Fri, 15 Oct 2010, Megh Dal wrote:
Dear all, I have following 2 zoo objects. However when I try to merge those 2
objects into one, nothing is coming as intended. Please see below the objects
as well as the merged object:
dat11
V2 V3 V4 V5
2010-10-15 13:43:54
On Fri, Oct 15, 2010 at 2:20 PM, Megh Dal wrote:
> Dear all, I have following 2 zoo objects. However when I try to merge those 2
> objects into one, nothing is coming as intended. Please see below the objects
> as well as the merged object:
>
>
>> dat11
> V2 V3 V4 V5
>
Hi Gabor, please see the attached files, which are in text format. I opened
them in Excel, then used the clipboard to load them into R. It is still really
unclear what to do.
Also can you please elaborate this term "index = list(1, 2), FUN = function(d,
t) as.POSIXct(paste(d, t))" in your previous fil
Hi:
You need to give a function for rollapply() to apply :)
Here's my toy example:
d <- as.data.frame(matrix(rpois(30, 5), nrow = 10))
library(zoo)
d1 <- zoo(d) # uses row numbers as index
# rolling means of 3 in each subseries (columns)
> rollmean(d1, 3)
V1 V2 V3
2 3.
Hi,
I am using R and tried to normalize the data within each sample group using
RMA. When I tried to import the all the normalized expression data as a
single text file and make a boxplot, it showed discrepancy among the sample
groups. I tried to scale them or re-normalize them again, so that it
Hi David,
More info
Thanks a lot
Christophe
##
library(Hmisc)
library(lattice)
library(fields)
library(gregmisc)
library(quantreg)
> str(sasdata03_a)
'data.frame': 109971 obs. of 6 variables:
$ jaar : Factor w/ 3 levels "2006","2007",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Cat_F
public class my_convolve
{
    public static void main(String[] args)
    {
    }

    public static void convolve()
    {
        System.out.println("Hello");
    }
}
library(rJava)
.jinit(classpath="C:/Documents and Settings/GV/workspace/Test/bin",
pa
I tried that too, it doesn't work because of the way I wrote the code.
Listing y as free or not giving it a limit makes the scale go from -0.5 to
0.5, which is useless. This is what my code looks like now (it's S-Plus
code, btw)-- I'll try reading up on lattices in R to see if I can come up
with so
Try this:
split(as.data.frame(DF), is.na(DF$x))
On Fri, Oct 15, 2010 at 9:45 AM, Jumlong Vongprasert wrote:
> Dear all
> I have data like this:
> x y
> [1,] 59.74889 3.1317081
> [2,] 38.77629 1.7102589
> [3,] NA 2.2312962
> [4,] 32.35268 1.3889621
> [5,] 74
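Gabor's split() call on a toy version of the data (the as.data.frame() is needed because the original object is a matrix):

```r
m <- cbind(x = c(59.7, 38.8, NA, 32.4), y = c(3.13, 1.71, 2.23, 1.39))
parts <- split(as.data.frame(m), is.na(m[, "x"]))

parts[["FALSE"]]  # the 3 rows where x is observed
parts[["TRUE"]]   # the 1 row where x is NA
```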
Thanks Dennis,
I don't think it was a problem of not feeding in a function for rollapply(),
because I was using mean() and my co.var() function in the FUN argument.
The key part seems to be the transformation that zoo() does to the matrix. If I
do the same transformation to my original matrix,
Hi R users,
I am trying to call openbugs from R. And I got the following error message:
~
model is syntactically correct
expected the collection operator c error pos 8 (error on line 1)
variable ww is not defined
Thank you for the very helpful tips. Just one last question:
- In the lattice method, how can I plot TIME vs OBSconcentration and TIME vs
PREDconcentration in one graph (per individual)? You said "in lattice you
would replace 'smooth' by 'l' in the type = argument of xyplot()" that just
means now t
Hello,
My question is: assuming I have cut() my sample and look at the
table() of it, how can I compute probabilities for the bins? Do I have
to parse the table's names() to fetch bin endpoints to pass them to
p[distr-name] functions? I really don't want to input arguments to PDF
functions by hand (n
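One way to avoid parsing the bin labels is to keep the breaks vector itself and hand it to the p-function; a sketch assuming a standard normal model:

```r
set.seed(42)
x <- rnorm(500)
breaks <- c(-Inf, -1, 0, 1, Inf)

# empirical bin proportions via cut()/table()
emp <- table(cut(x, breaks)) / length(x)

# theoretical bin probabilities from the same breaks -- no name parsing
theo <- diff(pnorm(breaks))
round(theo, 4)  # 0.1587 0.3413 0.3413 0.1587
```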
I have compared "dat11" and "x" using the str() function; however, I did not
find a drastic difference:
> str(dat11)
‘zoo’ series from 2010-10-15 13:43:54 to 2010-10-15 13:49:51
Data: num [1:7, 1:4] 73.8 73.8 73.8 73.8 73.8 73.8 73.7 73.8 73.8 73.8 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:7]
On Fri, 15 Oct 2010, Megh Dal wrote:
Hi Gabor, please see the attached files which is in text format. I have
opened them on excel then, used clipboard to load them into R. Still
really unclear what to do.
I've read both files using read.zoo():
R> z1 <- read.zoo("dat1.txt", sep = ",", header
I've read a number of examples on doing a multiple bar plot, but can't seem
to grasp
how they work or how to get my data into the proper form.
I have two variables holding the same factor.
The variables were created using a cut command. The following simulates that:
A <- 1:100
B <- 1:100
A[30:60]