On 25/10/15 17:14, John Sorkin wrote:
Bert,
Talking about Loglan and problems with the imprecise nature of English, which
sense of "sanction" do you mean?
- to authorize, approve, or allow: an expression now sanctioned by educated usage;
- to ratify or confirm: to sanction a law;
- to impose a sanction on; penalize, especially by way of discipline.
On 25/10/15 12:33, Bert Gunter wrote:
Rolf's solution works for the situation where all duplicated values
are contiguous, which may be what you need. However, I wondered how it
could be done if this were not the case. Below is an answer. It is not
as efficient or elegant as Rolf's solution for the contiguous case, I
think; maybe someone
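A sketch of the distinction Bert draws (IDs hypothetical): `sequence(rle(x)$lengths)` numbers positions within runs and so only works when duplicates are adjacent, while `ave()` with `seq_along` groups by value wherever it occurs.

```r
# Sequence-numbering duplicated IDs: contiguous vs. non-contiguous cases
id <- c(7, 7, 2, 7, 2, 5)   # hypothetical IDs; 7 and 2 recur non-contiguously

# Run-based numbering: counts positions within each *run* of equal values,
# so it is only correct when all duplicates are adjacent
run_seq <- sequence(rle(id)$lengths)

# Group-based numbering: ave() applies seq_along() within each ID group,
# regardless of where the duplicates appear
grp_seq <- ave(id, id, FUN = seq_along)

print(run_seq)  # 1 2 1 1 1 1 -> the second 7 at position 4 restarts at 1
print(grp_seq)  # 1 2 1 3 2 1 -> position 4 correctly gets 3
```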
I'm trying to generate predictions of the column "dubina" using this
algorithm made with R's "neuralnet" package, but I keep getting unreliable
neural-network output. I have tried changing the number of hidden layers and
normalizing and denormalizing the data. Is there a mistake in the algorithm,
maybe because
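One frequent cause of unusable predictions is a mismatch between how the training data were scaled and how the network's output is unscaled. A minimal sketch of consistent min-max normalization and denormalization (the column name "dubina" is from the question; the values and the commented neuralnet call are hypothetical):

```r
# Min-max scaling helpers: keep the original range so predictions
# can be mapped back to the original units
normalize   <- function(x, lo, hi) (x - lo) / (hi - lo)
denormalize <- function(z, lo, hi) z * (hi - lo) + lo

dubina <- c(12.4, 15.1, 9.8, 20.3, 17.6)   # hypothetical depth values
lo <- min(dubina); hi <- max(dubina)

scaled <- normalize(dubina, lo, hi)        # all values now in [0, 1]

# With the neuralnet package one would fit on the scaled data, e.g.:
#   nn   <- neuralnet(dubina ~ ., data = scaled_train, hidden = c(5, 3))
#   pred <- compute(nn, scaled_test)$net.result
# and only then map the output back with the *training* lo and hi:
pred_scaled <- scaled                      # stand-in for the net's output
pred <- denormalize(pred_scaled, lo, hi)

stopifnot(all.equal(pred, dubina))         # round trip recovers original units
```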
Hi Jim,
Conventionally, the units listed in a data description should represent
how to interpret the values of a variable. You would label the wt-axis in a
graph with the unit "1000 lb," so that should be the unit in the data
description. At least that is how data dictionaries in the statistics
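For example, the `wt` column of R's built-in `mtcars` follows this convention: its documented unit is 1000 lb, so values are converted when another unit is wanted:

```r
# mtcars$wt is documented as "Weight (1000 lbs)", so wt = 2.62 means 2620 lb
data(mtcars)
pounds <- mtcars$wt * 1000

# An axis label should carry the unit given in the data description, e.g.:
# plot(mtcars$hp, mtcars$wt, ylab = "Weight (1000 lb)")

head(pounds, 3)   # 2620 2875 2320 for the first three cars
```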
On Sat, 24 Oct 2015, Bert Gunter wrote:
Good day,
I am looking for assistance in applying R to credit risk work, mainly:
1. Credit scoring where a client can get any one credit risk rating from more
than two possible ratings, e.g. Very Good, Good, Fair, Bad, Very Bad.
2. Economic capital calculations and RAROC models.
Would be grateful for
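For a rating scale with more than two ordered categories (Very Bad < ... < Very Good), one standard starting point is ordered (proportional-odds) logistic regression, e.g. MASS::polr. A sketch on MASS's built-in housing data, whose ordered satisfaction levels stand in for rating grades:

```r
library(MASS)   # polr: proportional-odds logistic regression

# housing: satisfaction (Low < Medium < High) ~ covariates, with frequencies;
# an ordered credit rating (Very Bad < ... < Very Good) has the same structure
fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)

# Predicted probability of each ordered category for every row
probs <- predict(fit, type = "probs")
stopifnot(ncol(probs) == 3)   # one column per rating level
```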
On 24/10/15 21:10, Jim Lemon wrote:
Hi Ming,
In fact, the notation lb/1000 is correct, as the values represent the
weight of the cars in pounds (lb) divided by 1000. I am not sure why this
particular transformation of the measured values was used, but I'm sure it
has caused confusion previously.
I sanction this discussion.
(Google on "auto-antonyms")
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
On Sat, Oct 24, 2015 at 4:26 PM, Duncan Murdoch wrote:
> On 24/10/2015 6:07 PM, Rolf Turner wrote:
Reading the help page attentively, I saw that this function fits a regression
model for "circular dependent and linear independent" variables.
So my question now is: is there a way to fit a model with the circular
variable as the dependent variable?
Thanks
Antonio
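If I recall correctly, circular::lm.circular with type = "c-l" is meant for exactly this case (circular response, linear predictors); check ?lm.circular for the argument names. As a package-free approximation one can regress the sine and cosine components of the angle separately and recover the fitted direction with atan2 (a sketch on synthetic data):

```r
# Approximate circular-response regression by modelling the sine and
# cosine components separately and recombining with atan2()
set.seed(1)
x     <- runif(100, 0, 10)                             # hypothetical predictor
theta <- (0.5 * x + rnorm(100, sd = 0.1)) %% (2 * pi)  # circular response

fit_sin <- lm(sin(theta) ~ x)
fit_cos <- lm(cos(theta) ~ x)

# Fitted mean direction, mapped back onto the circle
theta_hat <- atan2(fitted(fit_sin), fitted(fit_cos)) %% (2 * pi)

head(theta_hat)   # fitted directions, all on [0, 2*pi)
```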
On 25/10/15 11:28, John Sorkin wrote:
I have a file that has (1) Line numbers, (2) IDs. A given ID number can appear
in more than one row. For each row with a repeated ID, I want to add a number
that gives the sequence number of the repeated ID number. The R code below
demonstrates what I want to have, without any attempt to produce
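A sketch of one way to build that sequence column (column names hypothetical), using ave() so it works whether or not the repeated IDs are adjacent:

```r
# Data frame with line numbers and possibly repeated IDs
df <- data.frame(Line = 1:6,
                 ID   = c(10, 20, 10, 30, 20, 10))   # hypothetical IDs

# For each row, the occurrence count of its ID so far:
# first 10 -> 1, second 10 -> 2, third 10 -> 3, etc.
df$SeqNum <- ave(df$ID, df$ID, FUN = seq_along)

print(df$SeqNum)   # 1 1 2 1 2 3
```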
On Sat, Oct 24, 2015 at 1:35 PM, Duncan Murdoch wrote:
>
> > However, editing the file with a text editor to create "proper" EOF
> > doesn't help.
>
> The problem is that you have valid-looking JSON objects on each odd
> numbered line, separated by single blank lines. The parser expects an
> EOF
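Given valid objects on their own lines separated by blanks, one workaround is to parse line by line rather than as one document (a sketch assuming the jsonlite package; the file layout follows Duncan's description, the contents are made up):

```r
library(jsonlite)

# Two JSON objects separated by a blank line, as in the problem file
tmp <- tempfile(fileext = ".json")
writeLines(c('{"a": 1}', '', '{"a": 2}'), tmp)

# Read all lines, drop the blank ones, and parse each object separately
lines <- readLines(tmp)
lines <- lines[nzchar(trimws(lines))]
objs  <- lapply(lines, fromJSON)

length(objs)   # 2 objects
objs[[2]]$a    # 2
```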
Dear R users
I'm trying to reproduce the results from Lowry et al. (2007), "Lunar landings -
relationship between lunar phase and catch rates for an Australian
gamefish-tournament fishery", Fisheries Research 88: 15–23.
Basically we have two columns: Lunar days and CPUE (catch per unit effort).
The aim
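One common way to relate catch rates to a cyclic predictor such as lunar day is harmonic regression: enter sine and cosine terms for the roughly 29.53-day synodic cycle as covariates. A sketch on synthetic data (the period and variable names are assumptions, not taken from the paper):

```r
# Harmonic regression of CPUE on lunar day (synodic period ~29.53 days)
set.seed(42)
lunar_day <- runif(200, 0, 29.53)                 # hypothetical sampling days
phase     <- 2 * pi * lunar_day / 29.53
cpue      <- 5 + 2 * cos(phase - 1) + rnorm(200, sd = 0.5)  # synthetic catches

fit <- lm(cpue ~ sin(phase) + cos(phase))

# The fit is a + b*sin + c*cos, i.e. a single sinusoid over the lunar cycle
summary(fit)$r.squared   # high here because the synthetic signal is strong
```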
I'm using some regex in a for loop to check for some values in column
"source", and put a result in column "fuente".
I need some advice on these topics:
- Making the regex parts work: I've tested them in regexpal.com and they
work, but not in R.
- Making the code (for loop) more efficient, more cle
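Two points that often resolve exactly this: R string literals need their backslashes doubled (a regexpal pattern \d+ becomes "\\d+" in R), and grepl() is vectorized, so the for loop can usually be dropped entirely (column names source/fuente are from the question; the patterns and values are hypothetical):

```r
df <- data.frame(source = c("web-123", "mail-77", "phone-9", "web-4"),
                 stringsAsFactors = FALSE)

# Patterns tested on regexpal need their backslashes doubled in R: \d -> \\d
# grepl() tests every row at once, so no explicit loop is needed
df$fuente <- ifelse(grepl("^web-\\d+$",  df$source), "web",
             ifelse(grepl("^mail-\\d+$", df$source), "mail", "other"))

print(df$fuente)   # "web" "mail" "other" "web"
```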
On 24/10/2015 12:11 AM, K. Elo wrote:
> Hi!
>
> You can download the example file with this link:
> https://www.dropbox.com/s/tlf1gkym6d83log/example.json?dl=0
>
> BTW, I have used a JSON validator and the problem seems to be related to
> a wrong/missing EOF.
>
> --- snip ---
> Error: Parse error on
If you want to use the lattice way of doing things, why are you using ggplot?
`+` is defined for the output of ggplot (class "waiver") on the left, and the
output of a layer function ("proto") on the right. The design of ggplot assumes
left-to-right evaluation, which your first attempt failed to
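The left-to-right design can be illustrated with a toy S3 class: R dispatches `+` on its operands' classes, so an object like a ggplot plot can define what "plus a layer" means. The class names below are made up; this is not ggplot2's implementation:

```r
# Toy illustration of operator dispatch like ggplot2's plot + layer
canvas <- function() structure(list(layers = character()), class = "canvas")
layer  <- function(name) structure(list(name = name), class = "layer")

# `+` dispatches on the S3 class: a canvas on the left absorbs a layer
"+.canvas" <- function(e1, e2) {
  e1$layers <- c(e1$layers, e2$name)
  e1
}

p <- canvas() + layer("points") + layer("smooth")  # evaluates left to right
p$layers   # "points" "smooth"
```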