On Sat, Sep 10, 2016 at 2:18 PM, Yichao Yu <yyc1...@gmail.com> wrote:

>
>
> On Sat, Sep 10, 2016 at 2:12 PM, Stuart Brorson <s...@cloud9.net> wrote:
>
>> I don't think you can define that in a continuous way.
>>> In general, if your application relies on a certain property of the basis,
>>> you should just normalize it that way. If you don't have such a
>>> requirement, then you shouldn't worry about it.
>>>
>>
>> Thanks for the thoughts.
>>
>> I did a little more thinking and Googling.  My conclusion:
>>
>> Yes, you can define the concept of orientation very clearly.  The
>> orientation of a vector space is given by the sign of the determinant
>> of the matrix formed by its basis set (eigenvectors).  The determinant
>> will be either +1 or -1 for an orthonormal set of basis vectors.  The
>> sign of the determinant divides the orthonormal bases into two
>> equivalence classes, one with + orientation, the other with -.
>>
>
>
> What I actually mean is that you can constrain the result of eigs.
>

Obviously I meant "can't".  You can certainly flip signs to get the
orientation you want, but not in a meaningful and unique way.
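To make that concrete, here is a minimal sketch (in Python, since it is easy to run anywhere) of the kind of ad-hoc sign convention being discussed: flip each eigenvector so its largest-magnitude entry is positive. The helper name `normalize_signs` is hypothetical; this is one common convention, not something eig itself guarantees, and the arbitrariness of the rule is exactly the point.

```python
# Sketch of an arbitrary sign convention for eigenvectors: flip each vector
# so that its largest-magnitude entry is positive.  This makes outputs from
# different libraries comparable, but the convention itself is an arbitrary
# choice -- hence "not meaningful and unique".
def normalize_signs(columns):
    out = []
    for v in columns:
        pivot = max(v, key=abs)  # entry with the largest magnitude
        out.append([-x for x in v] if pivot < 0 else list(v))
    return out

# v and -v represent the same eigenvector; both normalize to the same thing.
v = [0.4571, 0.5404, 0.5403, 0.4551]   # last column of the Matlab V below
w = [-x for x in v]                    # last column of the Julia V below
print(normalize_signs([v]) == normalize_signs([w]))  # True
```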


>
>>
>> I guess the concept of eigenvector/eigenvalue is more general than the
>> concept of orientation.  Put another way, you can have a basis set
>> (eigenvectors) without defining the orientation property, but you
>> can't have orientation without first having a basis set.
>>
>> The OP's results from Matlab and Julia belong to opposite
>> equivalence classes, which is what bothers me.  However,
>> from the standpoint of getting a basis, the concept of orientation is
>> not required.  One would need to impose some additional constraint on
>> the eigenvalue decomposition algorithm in order to always get an
>> eigenvector set with the same orientation, and I guess that additional
>> constraint isn't important for the problems usually solved by
>> eigendecomposition.
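A sketch of what such an extra constraint could look like: compute det(V) and, if it is negative, flip the sign of one (arbitrarily chosen) eigenvector so the basis lands in the +1 orientation class. The helpers `det` and `orient` are hypothetical illustrations in pure Python, not anything eig/LAPACK actually does, and the choice of which column to flip is itself arbitrary:

```python
# Hedged sketch: force an eigenvector matrix into the +1 orientation class.
# Which column to flip is an arbitrary choice, which is exactly why no such
# constraint is built into standard eigendecomposition routines.

def det(M):
    # Recursive Laplace expansion along the first row; fine for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def orient(V):
    # V is a list of rows; flip the last column if the basis is
    # negatively oriented.
    if det(V) < 0:
        return [row[:-1] + [-row[-1]] for row in V]
    return V

# The Julia eigenvector matrix from this thread (orthonormal columns,
# so det is close to +1 or -1):
V_julia = [[ 0.2583,  0.5402, -0.6576, -0.4571],
           [-0.6622, -0.4518, -0.2557, -0.5404],
           [ 0.6565, -0.4604,  0.2552, -0.5403],
           [-0.2525,  0.5404,  0.6611, -0.4551]]
print(det(orient(V_julia)))  # close to +1.0 after the orientation fix
```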
>>
>> Thanks for making me think about this,
>>
>> Stuart
>>
>>
>>
>>
>> On Sat, 10 Sep 2016, Yichao Yu wrote:
>>
>> On Sat, Sep 10, 2016 at 12:20 PM, Stuart Brorson <s...@cloud9.net> wrote:
>>>
>>>> Just a question from a non-mathematician.  Also, this is a math
>>>> question, not a Julia/Matlab question.
>>>>
>>>> I agree that Matlab and Julia are both correct -- within the
>>>> definitions of eigenvector and eigenvalue they compute, it's OK that
>>>> one eigenvector differs between the two by a factor of -1.  They're
>>>> probably calling the same library anyway.
>>>>
>>>> However, from a physics perspective, this bugs me.  In physics, an
>>>> important feature of a space is its orientation.  For example,
>>>> physics and engineering undergrads learn early on about the
>>>> "right-hand rule" for taking cross products of vectors.  This type of
>>>> product imposes an orientation (or chirality) on the vector space
>>>> under consideration.  Real mathematicians may be familiar with
>>>> the wedge product in exterior algebra, which (I believe) also imposes
>>>> an orientation on a vector space.  Wikipedia says:
>>>>
>>>> https://en.wikipedia.org/wiki/Orientation_(vector_space)
>>>>
>>>> The problem I have with Julia and Matlab returning a set of
>>>> eigenvectors where only one eigenvector is the negative of the other
>>>> is that the two eigenvector sets span spaces of opposite
>>>> orientation, so the returns are -- in some sense -- not the same.
>>>>
>>>> So my question:  Is there anything like a set of "oriented"
>>>> eigenvectors?  Is this even possible?  Or am I totally wrong, and
>>>> the two different sets of eigenvectors span the same space independent
>>>> of concerns about orientation?
>>>>
>>>>
>>>
>>> I don't think you can define that in a continuous way.
>>> In general, if your application relies on a certain property of the basis,
>>> you should just normalize it that way. If you don't have such a
>>> requirement, then you shouldn't worry about it.
>>>
>>>
>>>
>>>> Stuart
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, 10 Sep 2016, Tracy Wadleigh wrote:
>>>>
>>>>> Looks good to me. The eigenvalues look the same up to the precision
>>>>> shown, and eigenvectors are only unique up to a scalar multiple.
>>>>>
>>>>> On Sep 10, 2016 8:11 AM, "Dennis Eckmeier" <
>>>>> dennis.eckme...@neuro.fchampalimaud.org> wrote:
>>>>>
>>>>>> Hi!
>>>>>>
>>>>>> Here is a simple example:
>>>>>>
>>>>>> *Matlab:*
>>>>>> (all types are double)
>>>>>>  covX(1:4,1:4)
>>>>>>
>>>>>>   1.0e+006 *
>>>>>>
>>>>>>     3.8626    3.4157    2.4049    1.2403
>>>>>>     3.4157    3.7375    3.3395    2.3899
>>>>>>     2.4049    3.3395    3.7372    3.4033
>>>>>>     1.2403    2.3899    3.4033    3.8548
>>>>>>
>>>>>> [V,D] = eig(covX(1:4,1:4))
>>>>>>
>>>>>> V =
>>>>>>
>>>>>>     0.2583    0.5402   -0.6576    0.4571
>>>>>>    -0.6622   -0.4518   -0.2557    0.5404
>>>>>>     0.6565   -0.4604    0.2552    0.5403
>>>>>>    -0.2525    0.5404    0.6611    0.4551
>>>>>>
>>>>>>
>>>>>> D =
>>>>>>
>>>>>>   1.0e+007 *
>>>>>>
>>>>>>     0.0006         0         0         0
>>>>>>          0    0.0197         0         0
>>>>>>          0         0    0.3010         0
>>>>>>          0         0         0    1.1978
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Julia:*
>>>>>> (all types are Float64)
>>>>>> D =
>>>>>>    5759.7016...
>>>>>>    197430.4962...
>>>>>>    3.0104794...e6
>>>>>>    1.1978...e7
>>>>>>
>>>>>> V =
>>>>>>    0.2583    0.5402    -0.6576     *-0.4571*
>>>>>>    -0.6622   -0.4518   -0.2557    *-0.5404*
>>>>>>     0.6565   -0.4604    0.2552    *-0.5403*
>>>>>>    -0.2525    0.5404    0.6611    *-0.4551*
>>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> D
>>>>>>
>>>>>>
>>>>>> On Saturday, September 10, 2016 at 12:48:17 PM UTC+1, Michele Zaffalon
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>> You mean they are not simply permuted? Can you report the MATLAB and
>>>>>>> Julia results in the two cases for a small covX matrix?
>>>>>>>
>>>>>>> On Sat, Sep 10, 2016 at 1:34 PM, Dennis Eckmeier <
>>>>>>> dennis....@neuro.fchampalimaud.org> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I am new to Julia and rather a layman in math.  For practice, I am
>>>>>>>> translating a Matlab script (written by somebody else) and comparing
>>>>>>>> the result of each step in Matlab and Julia.
>>>>>>>>
>>>>>>>> The Matlab script uses  [V,D] = eig(covX);
>>>>>>>>
>>>>>>>> which I translated to Julia as: (D,V) = eig(covX)
>>>>>>>>
>>>>>>>> However, the outcomes don't match, although covX is the same.
>>>>>>>>
>>>>>>>> D.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>
>
