Dear all,
I have two files, both of similar formats. In column 1 are Latitude values
(real numbers, e.g. -179.25), column 2 has Longitude values (also real numbers)
and in one of the files, column 3 has Population Density values (integers);
there is no column 3 in the other file.
I think the approach is ok. I'm having difficulties though...!
I've managed to get 'merge' working (using the 'all' function as suggested),
but for some strange reason, the output file produces 12 extra rows! So now the
shorter file isn't the same length as the 'master' file, it's now longer!
Hmm, I'm having a fair few difficulties using 'merge' now. I managed to get it
to work successfully before, but in this case I'm trying to shorten (as opposed
to lengthening, as before) a file in relation to a 'master' file.
These are the commands I've been using, followed by the dimensions of the files.
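For what it's worth, a minimal sketch of the kind of merge described above (file
and column names are hypothetical). Extra rows after a merge are almost always
caused by duplicated Latitude/Longitude pairs in one of the inputs, so that is
worth checking first:

master <- read.table("master.asc", header = TRUE)
short  <- read.table("short.asc",  header = TRUE)

# all.x = TRUE keeps one output row per row of 'master', *provided*
# the Latitude/Longitude pairs are unique in both inputs
merged <- merge(master, short, by = c("Latitude", "Longitude"), all.x = TRUE)

# duplicated key pairs are what inflate the row count:
sum(duplicated(master[, c("Latitude", "Longitude")]))
sum(duplicated(short[, c("Latitude", "Longitude")]))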
Dear all,
I have gridded data at 5' (minutes) resolution, which I intend to coarsen to
0.5 degrees. How would I go about doing this in R? I've had a search online and
haven't found anything obvious, so any help would be gratefully received.
I'm assuming that there will be several 'coarsening'
Unfortunately, when I get to the 'myCuts' line, I receive the following error:
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
...and I also receive warnings about memory allocation being reached (even
though I've already used memory.limit() to maximise the memory available).
Please find below my command inputs, subsequent outputs and errors that I've
been receiving.
> crops <- read.table("crop2000AD.asc", colClasses = "numeric", na="-")
> str(crops[1:10])
'data.frame': 2160 obs. of 10 variables:
$ V1 : num NA NA NA NA NA NA NA NA NA NA ...
$ V2 : num NA N
Hi Jim,
Thanks for your advice. The problem is that I can't lose any of the data - it's
a global dataset, where the left-most column = 180 degrees west, and the
right-most is 180 degrees east. The top row is the North Pole and the bottom
row is the South Pole.
I've got 512MB RAM on the machine.
Ok thanks Jim - I'll give it a go! I'm new to R, so I'm not sure how I'd go
about performing averages in subsets... I'll have a look into it, but any
subsequent pointers would be gratefully received as ever!
I'll also try playing with it in Access, and maybe even Excel 2007 might be
able to do it.
Dear all,
I have a data frame of 2160 rows and 4320 columns, which I hope to condense to
a smaller dataset by finding averages of 6 by 6 blocks of values (to produce a
data frame of 360 rows by 720 columns).
How would I go about finding the mean of a 6 x 6 block, then the mean of
the next block, and so on across the whole data frame?
Thanks very much for both your suggestions.
When you refer to doing a 'double for loop', do you mean finding the average
for rowgrp and colgrp within each 6x6 block? If so, how would this be done so
that the whole data frame is covered? It would seem to me that the 'mean'
operation would need
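For the 6 x 6 block averaging (and the 5-arcminute to 0.5-degree coarsening
asked about earlier), one alternative to a double for loop is to build row and
column group indices and let rowsum() do the work. A sketch, assuming the
2160 x 4320 values are in (or coerced to) a matrix 'm':

m <- as.matrix(mydata)                    # 'mydata' = the 2160 x 4320 data frame
rowgrp <- rep(1:360, each = 6)            # 6 original rows per coarse row
colgrp <- rep(1:720, each = 6)            # 6 original columns per coarse column

tmp    <- rowsum(m, rowgrp) / 6           # average within row groups: 360 x 4320
coarse <- t(rowsum(t(tmp), colgrp) / 6)   # then within column groups: 360 x 720
dim(coarse)                               # 360 720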
Dear all,
I'm trying to interpolate a dataset to give it twice as many values (I'm giving
the dataset a finer resolution by interpolating from 1 degree to 0.5 degrees)
to match that of a corresponding dataset.
I have the data in both a data frame format (longitude column header values
along t
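Not the list's answer, but one rough base-R way to halve the grid spacing is to
interpolate with approx() along each row and then along each column; the
1-degree and 0.5-degree cell centres below are assumed:

lat1  <- seq(-89.5,  89.5,  by = 1)       # assumed 1-degree cell centres
lon1  <- seq(-179.5, 179.5, by = 1)
lat05 <- seq(-89.75,  89.75,  by = 0.5)   # target 0.5-degree cell centres
lon05 <- seq(-179.75, 179.75, by = 0.5)
z <- matrix(rnorm(length(lat1) * length(lon1)), nrow = length(lat1))  # stand-in grid

step1 <- t(apply(z, 1, function(r) approx(lon1, r, xout = lon05, rule = 2)$y))  # 180 x 720
z05   <- apply(step1, 2, function(s) approx(lat1, s, xout = lat05, rule = 2)$y) # 360 x 720
dim(z05)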
Dear all,
I have a tab-delimited text (.txt) file which I'm trying to read into R. This
file is of column format - there are in fact 3 columns and 259201 rows
(including the column headers). I've been using the following commands, but
receive an error each time which prevents the data from being read in.
Thanks Prof. Ripley! I knew it would be something simple - I'd missed the "\t"
from the read.table command! I won't be doing that again...!!
Thanks again,
Steve
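For the archives, the call that was needed looks like this (file name
hypothetical):

mydata <- read.table("mydata.txt", header = TRUE, sep = "\t")
str(mydata)   # expecting 3 columns and 259200 data rows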
Dear R Users,
I have a data frame of 360 rows by 720 columns (259200 values). I am hoping to
apply an equation to each value in this grid to generate a new grid. One
of the parts of the equation (called 'p') relies on reading from a separate
reference table. This is Table 4 at:
http://ww
Thanks to you both for your responses.
I think these approaches will *nearly* do the trick, however, the problem is
that the reference/lookup table is based on 'bins' of latitude values, e.g. >61,
60-56, 55-51, 50-46 etc., whereas the actual data (in my 720 x 360 data frame)
are not binned, e.g.
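One way to bridge that gap is to assign each latitude to a bin with cut() and
then index the lookup table by bin number. The break points and p values below
are only placeholders for the real Table 4 entries:

lat    <- c(58.3, 47.9, 61.4, 52.0)                  # example latitudes
breaks <- c(-Inf, 45.5, 50.5, 55.5, 60.5, Inf)       # ..., 46-50, 51-55, 56-60, >61
p_tab  <- c(0.20, 0.23, 0.26, 0.29, 0.32)            # placeholder 'p' per bin
bin    <- cut(lat, breaks = breaks, labels = FALSE)  # 4 2 5 3
p      <- p_tab[bin]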
Dear all,
A hopefully simple question: how do I round a series of values (held in an
object) to the nearest 5? I've checked out trunc, round, floor and ceiling, but
these appear to be more tailored towards rounding decimal places.
Thanks,
Steve
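For the record, the usual idiom is scale, round, rescale:

x <- c(47.9, 52.0, 58.3, -61.4)
round(x / 5) * 5    # 50 50 60 -60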
Dear R Users,
I'm trying to use the 'cast' function in the 'reshape' package to convert
column-format data to gridded-format data. A sample of my dataset is as follows:
head(finalframe)
  Latitude Longitude Temperature OrigLat  p-value   Blaney
1      -90    -38.75          NA  -87.75 17.10167
Dear all,
I'm trying to replace NA values with - in one column of a data frame. I've
tried using is.na and the testdata[testdata$onecolumn==NA] <- approach,
but whilst neither generate errors, neither result in -s appearing - the
NAs remain there!
I'd be grateful for any advice o
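The comparison ==NA can never match anything, because any comparison with NA
returns NA, which is why nothing changes; indexing with is.na() does work. The
replacement value below is only a stand-in, since the intended value is cut off
in the message:

testdata$onecolumn[is.na(testdata$onecolumn)] <- -9999   # -9999 is illustrative only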
Dear R Users,
I have 120 objects stored in R's memory and I want to pass the names of these
many objects to be held as just one single object. The naming convention is
month, year in sequence for all months between January 1986 to December 1995
(e.g. Jan86, Feb86, Mar86... through to Dec95). I
Dear R Users,
I am attempting to write a new netCDF file (using the ncdf) package, using 120
grids I've created and which are held in R's memory.
I am reaching the point where I try to put the data into the newly created
file, but receive the following error:
> put.var.ncdf(evap_file, evap_di
month.abb should do the trick
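e.g., combined with rep() and paste():

object_names <- paste(rep(month.abb, times = 10),
                      rep(86:95, each = 12), sep = "")
head(object_names)    # "Jan86" "Feb86" "Mar86" ...
length(object_names)  # 120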
That sounds like a sensible way of dealing with the - values...
...but doesn't solve the more important question of how to perform the
resampling. Are there any functions in R which have been designed to achieve
this? Or is there a standard way of going about this?
Many thanks for any advice.
Dear all,
I am attempting to perform what should be a relatively simple calculation on a
number of data frame columns. I am hoping to find the average on a per-row
basis for each of the 50 columns. If on a particular row a 'NA' value is
encountered, then this should be ignored and the mean calculated from the
remaining values in that row.
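rowMeans() handles this directly; a sketch, assuming the 50 columns of interest
are columns 1 to 50 of a data frame called 'mydata':

row_avg <- rowMeans(mydata[, 1:50], na.rm = TRUE)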
Dear all,
I am attempting to convert 10 NetCDF files into a single NetCDF file, due to
the data input requirements of a model I hope to use. I am using the ncdf
package, version 1.6. The data are global-scale water values, on a monthly
basis for 10 years (ie. 120 months of data in total; at pr
Dear all,
I currently have a data frame of dimensions 18556 rows by 19 columns. I want to
convert this into a grid of dimensions 720 rows by 360 columns. The problem in
this case is that not all rows in the initial data frame are complete (there
are gaps).
Therefore I am perhaps looking for a
Dear all,
I have a data frame of 18556 rows and 19 columns and wish to create a new grid
from these data of dimensions 360 rows and 720 columns.
The existing data frame is set up so that every 38 rows makes up one row of the
new data frame, with 2 NA values at the end of each 'block' that shou
Dear all,
I have a plot which contains 4 data series displayed using solid, dashed and
dotted lines, and also points. How do I use lty and pch together to signify
that the first legend item is a solid line, the second is point data (pch=16),
the third is dashed and the fourth is dotted?
Many
Dear all,
I have a dataset of 1073 rows, the first 15 of which look as follows:
> data[1:15,]
date year month day rammday thmmday
1 3/8/1988 1988 3 8 1.43 0.94
2 3/15/1988 1988 3 15 2.86 0.66
3 3/22/1988 1988 3 22 5.06 3.43
4 3/29/1988 1988 3 29
Re: [R] Summing data based on certain conditions
> ?by may also be helpful.
> Stephan
Dear all,
I'm having trouble getting the correct spacing between x-axis labels on a
barplot. This is the command I'm using to generate the plot:
temp <- barplot(precip, beside=TRUE, xaxt="n", las=1, xpd=FALSE, col="grey28",
ylim=c(0, max(precip)))
Here is the structure of temp:
> str(temp)
n
Dear all,
I am using the following code to generate a legend in my plot (consisting of
both bars and points), but end up with boxes around my points:
legend(10, par("usr")[4], c("A", "B", "C", "D"), fill=c(NA,NA, "grey28", NA),
pch=c(16,4,NA,18), col=c("red","blue","grey28","yellow"), lty=FALS
Re: [R] Unwanted boxes in legend
> On 2010-04-21 4:35, Peter Ehlers wrote:
>> The 'border' argument was added in 2.1.10.
> Egad! Did I really type that?
> I meant 'in R 2.10.0'.
> -Peter Ehlers
Dear all,
I am using abline(lm ...) to insert a linear trendline through a portion of my
data (e.g. dataset[,36:45]). However, I am finding that whilst the trendline is
correctly displayed and representative of the data portion I've chosen, the
line continues to run beyond this data segment an
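One way around this (the message is cut off here, but abline() always spans the
full plot region) is to draw the fitted values with lines() over just the
chosen range. The x positions and data selection below are purely illustrative:

x   <- 36:45
y   <- as.numeric(dataset[1, 36:45])   # hypothetical slice of the data
fit <- lm(y ~ x)
lines(x, fitted(fit), lwd = 2)         # drawn only over x = 36..45, unlike abline()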
Thanks Mark, the reg.line trick seemed to work really well.
David - hopefully the hex-text will have gone now - if not, please accept
my apologies as, this is, as far as I kn
Dear all,
I am attempting to subset a data frame based on a range of latitude values. I
want to extract the values of 'interception' where latitude ranges between 50
and 60. I am doing this using the following code, yet it doesn't return the
results I expected:
> test <- subset(int1901, Lati
Dear Bill and all,
Yep you were right - for some strange reason (I'm not sure how...), the
latitude data were o
Bill,
It seems to be 'character' - odd...!
> str(int1901$Latitude)
 chr [1:61537] "5.75" "6.25" "6.75" "7.25" "7.75" "8.25" ...
Thanks again,
Steve
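So the fix is simply to convert before subsetting ('interception' is the column
named in the original question):

int1901$Latitude <- as.numeric(int1901$Latitude)
test <- subset(int1901, Latitude >= 50 & Latitude <= 60, select = interception)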
Dear all,
I am trying to validate a model by comparing simulated output values against
observed values. I have produced a simple X-y scatter plot with a 1:1 line, so
that the closer the points fall to this line, the better the 'fit' between the
modelled data and the observation data.
I am now
Dear all,
I want to determine if the slopes of the trends I have in my plot are
significantly different from each other (I have 2 time-series trends). What
statistical test is most suitable for this purpose and is it available in the R
base package?
Many thanks,
Steve
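A standard base-R approach is to fit a single model with a series-by-time
interaction and test the interaction term (an ANCOVA-style slope comparison);
the data frame and column names here are hypothetical:

# 'value' = the response, 'time' = the common x variable,
# 'series' = a factor distinguishing the two time series
fit <- lm(value ~ time * series, data = both_series)
summary(fit)   # the time:series coefficient tests whether the two slopes differ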
Dear R Users,
I have 12 data frames, each of 12 rows and 2 columns.
e.g. FeketeJAN
                   MEAN      SUM_
AMAZON      144.4997874   68348.4
NILE          5.4701955    1394.9
CONGO        71.3670036   21196.0
MISSISSIPPI  18.9273250    6511.0
AMUR          1.8426874     466.2
PARANA       58.38354
Thanks for all the useful information; use of 'c(...)' did the trick, although
in future I'll try to hold the data in a more user-friendly setup.
I've now got a plot, but have two issues that I can't seem to resolve:
1, The ylab is overlapping the y-axis tick mark values. I've tried using oma
Many thanks once more for helping me to solve this.
Gabor - I wasn't even aware of month.abb, so thanks for bringing this useful
trick to my attention!
Steve
Dear all,
I have a file which I've converted from NetCDF (.nc) to text (.txt) using
ncdump in Unix (as I had problems using the ncdf package to do this). The first
few rows (as copied and pasted from the Unix console) of the file appear as
follows:
_, _, _, _, _, _, _, _, _, _, _, _, _, _, _
Dear all,
I have seen a response from Duncan Murdoch which comes close to solving this
one, but I can't quite seem to tailor it to fit my needs!
I am trying to make just the title in my legend as bold font, with the legend
'items' in normal typeface.
I've tried setting par(font=2) external to
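One simple route, rather than fiddling with par(font), is to pass the title as
a plotmath expression so that bold() applies only to that text; the legend
items below are illustrative:

legend("topright", legend = c("Series A", "Series B"), lty = 1:2,
       title = expression(bold("Legend title")))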
Dear R Users,
I have a data frame of 4 columns and ~58000 rows, the top of which looks like
this:
> head(max_out)
  Latitude Longitude Model Obs
1    -0.25    -49.25     4   4
2    -0.25    -50.25     4   5
3    -0.25    -50.75     4   4
4    -0.25    -51.25     3  11
5    -0.25    -5
Dear R Users,
I am trying to plot a barchart with a line graph superimposed (using
par(new=TRUE)). There are 12 bars and 12 corresponding points for the line
graph. This is fine, except that I'm encountering two problems:
1) The positions of the points (of the line graph) are not centred on the
Thanks for the replies.
This is a simple example which demonstrates exactly the problems I'm facing. As
you can see, neither the x nor the y axes line up consistently.
> barplot(1:12, names.arg=substr(month.abb, 1,1))
> par(new=TRUE)
> plot(1:12, 1:12, type="b", lwd=2, col="red", xaxt="n")
I'd be
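The usual cure is to drop par(new = TRUE) altogether: barplot() returns the bar
midpoints, and drawing the line at those x positions keeps both layers in one
coordinate system:

mids <- barplot(1:12, names.arg = substr(month.abb, 1, 1), ylim = c(0, 12))
lines(mids, 1:12, type = "b", lwd = 2, col = "red")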
Dear all,
I've been trying to superscript the '2' in the following command (I don't want
the '^' displayed), but as yet haven't had much luck. I've tried both the paste
and expression commands, but neither have brought me any joy!
mtext(side=2, line=5.5, "Monthly Precipitation (mm x 10^2/month
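plotmath takes care of the superscript; completing the (truncated) call along
these lines:

mtext(side = 2, line = 5.5,
      expression(paste("Monthly Precipitation (mm x ", 10^2, "/month)")))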
Dear R Users,
I'm finding that when I execute the following bit of code, that the new line
argument is actually displayed as text in the graphics device. How do I avoid
this happening?
mtext(side=2, line=5.5, expression(paste("Monthly Summed Runoff (mm/month)",
"/n", "and Summed Monthly Preci
Thanks for the response, however, whilst this eliminates the 'new line'
character from appearing, it doesn't actually cause a new line to be printed!
Instead, I have the last few characters of the first character string
overlapping with the first few characters of the next.
How best should I c
Thanks again for a very useful comment. That seems to have separated the text
and put it onto separate lines.
However, whilst this results in the text being centralised in relation to the
axis, it means that the lower line is left-justified in relation to the upper
line, rather than being centralised.
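Since plotmath expressions cannot contain line breaks, one workaround is two
separate mtext() calls on adjacent margin lines, each of which is centred on
the axis by default (the wording is abbreviated here):

mtext("Monthly Summed Runoff (mm/month)", side = 2, line = 6.5)
mtext("and Summed Monthly Precipitation (mm/month)", side = 2, line = 5)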
Dear all,
I am trying to read in and assign data from 50 tables in an automated fashion.
I have the following code, which I created with the help of textbooks and the
internet, but it only seems to read in the final data file over and over again.
For example, when I type:> table_1951 I get th
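Without seeing the loop it is hard to say where it goes wrong, but a pattern
that does keep all 50 tables separate is sketched below (the file names are
assumed); a named list built with lapply() is usually easier to handle than 50
individual objects:

years <- 1951:2000
for (y in years) {
  assign(paste("table_", y, sep = ""),
         read.table(paste("table_", y, ".txt", sep = ""), header = TRUE))
}

# or, keeping everything in one list:
tables <- lapply(years, function(y)
  read.table(paste("table_", y, ".txt", sep = ""), header = TRUE))
names(tables) <- paste("table_", years, sep = "")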
Dear all,
I am trying to manually re-sort rows in a number of tables. The rows aren't
sorted on any particular values but are simply ordered by user choice (as shown
by the row numbers in the code). I have been able to carry out each
re-arrangement without the use of the 'for' loop, but cannot
Thanks all - I'm fairly new to R, so I was oblivious to the pros and cons of
using a data frame as opposed to a list! The 'get' command also seemed to work
successfully.
Thanks again,
Steve
Dear all,
I'm trying to assign a name to the fourth column whilst using 'assign', but
keep encountering errors. What have I done wrong?!
> assign(colnames(c(paste("arunoff_",table_year, sep="")[4]), "COUNT"))
Error in if (do.NULL) NULL else if (nc> 0) paste(prefix, seq_len(nc), :
argument
Dear all,
I'm trying to read in a whole directory of files which have two variable parts
to the file name: year and month. E.g. comp198604.asc represents April of 1986
- 'comp' is fixed in each case. Years range between 1986 to 1995 and months are
between 1 and 12.
Just to be clear, there are
Dear all,
Thanks for the help in the previous posts. I've considered each one and have
nearly managed to get it working. The structure of the filelist being produced
is correct, except for a single space which I can't seem to eradicate! This is
my amended code, followed by the first twelve rows:
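The stray space almost always comes from paste()'s default sep = " "; either
set sep = "" or build the names with sprintf(), which also zero-pads the month:

index    <- expand.grid(year = 1986:1995, month = 1:12)
filelist <- sprintf("comp%04d%02d.asc", index$year, index$month)
head(sort(filelist))   # "comp198601.asc" "comp198602.asc" ...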
Thanks, that's great - just what I was looking for.
Dear all,
I think I'm nearly there in writing R code which will read in files with two
variable parts to the file name and then assigning these file names to objects,
which also have two variable parts. I have got the code running without
encountering errors, however, I receive 50+ of the same
Dear all,
I've been trying to implement the advice given to me, but without much success
so far. I thought I'd provide the code in full in the hope that it might make
more sense. Just to reiterate, I'm attempting to change the header of the 4th
column of every table to "COUNT".
year<- 1951:2
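A sketch of the renaming loop, using get()/assign() (the object-name pattern
follows the truncated code above, and 1951:2000 is assumed for the cut-off
'year' line):

year <- 1951:2000
for (y in year) {
  nm  <- paste("arunoff_", y, sep = "")
  tmp <- get(nm)
  colnames(tmp)[4] <- "COUNT"
  assign(nm, tmp)
}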
Jim and all,
Thanks - I managed to get it working based on your helpful advice.
I'm now trying to do something very similar which simply involves changing the
names of the variables in column 1 to make them more succinct. I'm trying to do
this via the 'levels' command as I figured that I might
Dear all,
Apologies for yet another question (!). Hopefully it won't be too tricky to
solve. I am attempting to add row and column names (these are in fact numbers)
to each of the tables created by the code (120 in total).
# Create index of file names
files <- print(ls()[1:120], quote=FALSE)
Dear Peter, Jim and all,
Thanks for the information regarding how to structure 'assign' commands. I've
had a go at doing this, based on your advice, and although I feel I'm a lot
closer now, I can't quite get it to work:
rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75, length = 360))
c
Dear all,
I am attempting to add row and column names to a series of tables (120 in
total) which have 2 variable parts to their name. These tables are created as
follows:
# Create table indexes
index <- expand.grid(year = sprintf("%04d", seq(1986, 1995)), month =
sprintf("%02d", 1:12))
#
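Continuing the (truncated) code above, a loop that attaches the row and column
names to each table via get()/assign(); the 'table_' prefix is assumed from the
earlier messages:

rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75, length = 360))
cnames <- sprintf("%.2f", seq(from = -179.75, to = 179.75, length = 720))
index  <- expand.grid(year = sprintf("%04d", seq(1986, 1995)),
                      month = sprintf("%02d", 1:12))
for (i in seq_len(nrow(index))) {
  nm  <- paste("table_", index$year[i], index$month[i], sep = "")
  tmp <- get(nm)
  rownames(tmp) <- rnames
  colnames(tmp) <- cnames
  assign(nm, tmp)
}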
Dear R Users,
I'm trying to use the reshape package to 'melt' my gridded data into column
format. I've done this before on individual files, but this time I'm trying to
do it on a directory of files (with variable file names) - therefore I have to
also use the 'assign' command. I have come up
Dear R Users,
I have 120 data frames of the format table_198601, table_198602...
table_198612, table_198701, table_198702... table_198712 through to
table_199512 (ie. the first 4 digits are years which vary from 1986 to 1995,
and the final two digits are months which vary from 01 to 12 for each year).
Dear R Users,
I am using the reshape package to reformat gridded data into column format
using the code shown below. However, when I display the resulting object, a
single column is formed (instead of three) and all the latitude values (which
should be in either column one or two) are collected
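For reference, melt() in the reshape package gives the expected three-column
layout when it is handed a matrix that carries its dimnames (a tiny made-up
example, not the real data):

library(reshape)
m <- matrix(1:4, nrow = 2,
            dimnames = list(c("-89.75", "-89.25"), c("-179.75", "-179.25")))
melt(m)   # three columns: X1 (row label), X2 (column label), value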
Genius! Thanks very much Hadley - that was surprisingly easier to solve than I
was anticipating!
As a way of offering something back in return, I don't know if you plan to
release a new version of the reshape package, but here's a suggestion to
consider, just in case you do.
On the basis of w
Dear all,
I have 2 data frames, both with 14 columns of data and differing numbers of
rows. The first two columns are 'Latitude' and 'Longitude'. I want to find the
pairs of Latitude and Longitude coordinates which are common to both datasets,
and output a new data frame which is composed of
Dear Steve,
> Try
>   ? intersect
> and see if that might help.
> Cheers,
> Umesh
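merge() on the two coordinate columns is another option: it returns only the
rows whose (Latitude, Longitude) pair occurs in both data frames (names
hypothetical):

common <- merge(frame1, frame2, by = c("Latitude", "Longitude"))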
Dear all,
I have a data frame of three columns, which I have sorted by Latitude as
follows:
> test2[60:80,]
      Latitude Longitude  Sim_1986
61948    85.25    -29.25  2.175345
61957    85.25    -28.75  8.750486
61967    85.25    -28.25 33.569305
61977    85.25    -27.75 23.702572
61988    85.
Many thanks for the very useful responses in such a short time. I'm not a
former SAS user - more a naive R user who didn't realise that a sort wasn't
necessary!
Jim, your solution worked really well - thanks.
Thanks again for the great solutions.
Steve
Dear all,
I am trying to use 'merge' within a loop, however, I receive an error relating
to the 'by' argument of the command, as follows:
> merge_year <- 1986
>
> for (i in 1:10) { # Number of file pairs
+ assign(paste("merged_arunfek_", merge_year, sep=""),
merge(x=paste("arunoff_",start
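The pasted names are only character strings, so merge() needs them wrapped in
get(). A sketch of the loop body (the second object-name prefix and the 'by'
columns are guesses, for illustration only):

merge_year <- 1986
for (i in 1:10) {
  assign(paste("merged_arunfek_", merge_year, sep = ""),
         merge(x = get(paste("arunoff_", merge_year, sep = "")),
               y = get(paste("fekete_", merge_year, sep = "")),
               by = c("Latitude", "Longitude")))
  merge_year <- merge_year + 1
}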
Dear R Users,
I have a data frame of the nature:
> head(aggregate_1986)
  Latitude  Mean Annual Simulated Runoff per 1° Latitudinal Band
1      -55  574.09287
2      -54  247.23078
3      -53
Dear R Users,
I am executing the following command to produce a line graph:
matplot(aggregate_1986[,1], aggregate_1986[,2:3], type="l", col=2:3)
On the x-axis I have values of Latitude (in column 1) ranging from -60 to +80
(left to right on the x-axis). However, I wish to have these values sho
Dear all,
I'm attempting to insert a legend into a line graph. I've sorted out the
positioning, but I'm unable to display the sample line and associated colour to
go within the legend box. Instead, under the variable names, the numbers 1, 2,
2, 3 are displayed in a column (with '2' repeated tw
Dear all,
I have a matrix called combine86 which looks as follows:
> combine86
             Sim Mean    Obs Mean Sim Sum Obs Sum
AMAZON      1172.0424  1394.44604  553204  659573
NILE         262.4440   164.23921   67973   41881
CONGO        682.8007   722.63971  205523  214624
MISSISSIPPI  363.
Thanks for the reply - the 'beside' argument certainly looks useful, although
I'm still not getting the output I'd hoped for.
By doing: barplot(combine86[,1:2], beside = TRUE, las = 1,
xlab=rownames(combine86))
...I get all the bars for the 'Sim Mean' column plotted on the left side of the
gra
Jim and all,
Thanks for the suggestion, however, I get the following error:
> barplot(t(combine86[,1:2], beside = TRUE, las = 1))
Error in t(combine86[, 1:2], beside = TRUE, las = 1) :
unused argument(s) (beside = TRUE, las = 1)
I've looked up ?t and cannot see any extra arguments that I sho
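The error is just a misplaced parenthesis: beside and las are barplot()
arguments, but as written they end up inside t(). Moving the bracket fixes it:

barplot(t(combine86[, 1:2]), beside = TRUE, las = 1)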
Dear all,
I have produced a barplot and wish to alter the axes a little. In place of the
variable names appearing on the x-axis, I'd like to have the numbers 1986 to
1995. I have tried using the argument xlim=c(1986,1995) in the barplot command
but receive: "Error in plot.window(xlim, ylim, lo
A very reasonable request! Sorry for not doing this initially, but please find
below the data I am trying to plot:
> total_sums
           sums86   sums87   sums88   sums89   sums90   sums91   sums92
Sim_1986 17722203 16875889 18626582 18428415 17611182 17290016 16819289
X1986    15276602 140862
> dput(total_sums)
structure(c(17722202.6898231, 15276602.215475, 16875888.5155229,
14086271.625756, 18626581.9628846, 15387747.481166, 18428414.8535184,
15560882.404998, 17611181.5207881, 14905453.195546, 17290016.3934661,
14939493.120707, 16819288.8227961, 13979000.614402, 17657959.3656573,
Thanks Jim, that's great. Based on the information in the previous messages, is
it possible to change the y-axis as I'd hoped?
Thanks again,
Steve
Dear R Users,
I'm able to display a legend using the following code:
> legend("topright", c("Simulation", "Observation"), fill=2:3, bty="n")
However, this causes the legend to be positioned too close to the bars in my
barplot. I'd like to move the legend up slightly. I have been trying to
det
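The inset argument (together with xpd = TRUE if the legend is allowed to sit in
the margin) shifts it away from the bars, e.g.:

legend("topright", c("Simulation", "Observation"), fill = 2:3, bty = "n",
       inset = c(0, -0.1), xpd = TRUE)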
Dear all,
I have 30 arrays, each with dimensions 720,360,12. The naming format for each
of these 30 objects is: mrunoff_5221, mrunoff_5222... mrunoff_5250.
For example:
> str(mrunoff_5221)
num [1:720, 1:360, 1:12] NA NA NA NA NA NA NA NA NA NA ... (the initial NA's
are nothing to worry abo
Dear all,
I've been using R on a Mac to process some data for export to ArcMap GIS (which
only runs on Windows). ArcMap seems to require tab-delimited data (my data are
in 3 columns), so I've been using the sep="\t" argument. However, this resulted
in strange end-of-line characters when displa
Dear all,
Thanks for the replies so far.
Just to emphasise, I'm not using Excel in any way. I have many many files to
output, so it'd take considerable time to export from R, reprocess in Excel,
then load into Arc! On a PC I'm able to go directly from R to ArcMap (9.3)
without having to go vi
Dear all,
Just to let you know that thanks to your help, I've managed to solve it.
For future reference, if anyone's interested (!), if you're having problems
reading R-generated data from a Mac, into ArcMap on a PC, then ensure that
you're using eol="\r\n" in the write.table command and that
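i.e., something along the lines of the call below (row.names = FALSE and
quote = FALSE are my additions, not necessarily what the truncated sentence
went on to say):

write.table(mydata, "output.txt", sep = "\t", eol = "\r\n",
            row.names = FALSE, quote = FALSE)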
Dear all,
I am trying to calculate the mean of one column for many data frames. The code
I am using is as follows:
> for (i in seq(nrow(index))) {
assign(paste(model, "_mean_",index$year[i], index$month[i], sep=''),
mean(get(paste(model, index$year[i], index$month[i], "[,3
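The trailing "[,3" cannot live inside the string: get() only resolves an object
name, so the column has to be selected on the result, e.g. (na.rm added as an
assumption):

assign(paste(model, "_mean_", index$year[i], index$month[i], sep = ""),
       mean(get(paste(model, index$year[i], index$month[i], sep = ""))[, 3],
            na.rm = TRUE))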
Thanks! I've learnt something there!
Steve
Dear all,
I'm attempting to find the overall range of values from column 5 of a series of
data frames, on a per-row basis, and assign the results to a new object. At
present, I'm only able to receive the overall range of all values, whereas I'm
intending to get the results of the range for each
Dear all,
I am attempting to perform a calculation which counts the number of positive
(or negative) values based on the sample mean (on a per-row basis). If the mean
is > 0 then only positive values should be counted, and if the mean is < 0 then
only negative values should be counted. In cases w
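A row-wise sketch with apply(); how rows whose mean is exactly zero (or all NA)
should be treated is left open, as it is in the question ('mydata' is
hypothetical):

counts <- apply(mydata, 1, function(x) {
  m <- mean(x, na.rm = TRUE)
  if (is.nan(m))    NA
  else if (m > 0)   sum(x > 0, na.rm = TRUE)
  else if (m < 0)   sum(x < 0, na.rm = TRUE)
  else              NA
})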