Hi,

while I was analysing my data as shown below, I ran into a problem with heatmap():
> dat <- ReadAffy()
> dat
AffyBatch object
size of arrays=1164x1164 features (20 kb)
cdf=HG-U133_Plus_2 (54675 affyids)
number of samples=10
number of genes=54675
annotation=hgu133plus2
notes=
> dat2 <- rma(dat)
Background correcting
Normalizing
Calculating Expression
> dat.m <- exprs(dat2)

The normalized data matrix is so large that clustering all the genes (or all the arrays) is not feasible: clustering about 23,000 genes already takes about 1 GB of memory, and because the pairwise distance calculation grows roughly with the square of the number of genes, clustering all 54,675 genes would need well over 4 GB, which is not possible on a standard Windows workstation.

So I drew a random sample of the genes and clustered that sample instead; this should convey approximately the same information as clustering the whole dataset:

> n <- 1:nrow(dat.m)
> n.s <- sample(n, nrow(dat.m)*0.1)
> dat.sample <- dat.m[n.s, ]
> library(amap)
> clust.genes <- hcluster(x=dat.sample, method="pearson",
+                         link="average")
> clust.arrays <- hcluster(x=t(dat.sample), method="pearson",
+                          link="average")

The sample size here is 10% of the original dataset.

I then tried to visualize the clustering results as a heatmap:

> heatcol <- colorRampPalette(c("Green", "Red"))(32)
> heatmap(x=as.matrix(dat.m), Rowv=as.dendrogram(clust.genes),
+         Colv=as.dendrogram(clust.arrays), col=heatcol)
Error in heatmap(x = as.matrix(dat.m), Rowv = as.dendrogram(clust.genes),  :
  row dendrogram ordering gave index of wrong length

Could the sampling of the data be what makes heatmap() fail here?

Cheers,
Tahani.
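P.S. My own guess (only an assumption on my part) is that the row dendrogram and the matrix I pass no longer match: clust.genes was built from the 10% sample, but x=as.matrix(dat.m) still has all 54,675 rows. A minimal check I could run, assuming hcluster() returns an ordinary hclust-style object:

nrow(dat.m)                                           # rows in the full matrix: 54675
length(order.dendrogram(as.dendrogram(clust.genes)))  # leaves in the row dendrogram: only the sampled genes

If that mismatch is indeed the cause, I suppose I would have to pass the sampled matrix itself, e.g. heatmap(x=as.matrix(dat.sample), ...), so that the rows line up with the dendrogram leaves, although the heatmap would then show only the sampled genes.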