James Stroud wrote:
> Hello All,
>
> I'm using numpy to calculate determinants of matrices that look like
> this (13x13):
>
> [[ 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
>  [ 1. 0. 1. 4. 1. 9. 4. 4. 1. 1. 4. 9. 4. 9.]
>  [ 1. 1. 0. 1. 4. 4. 9. 9. 4. 4. 1. 4. 1. 4.]
>  [ 1. 4. 1. 0. 9. 1. 4. 4. 9. 1. 4. 1. 4. 1.]
>  [ 1. 1. 4. 9. 0. 4. 4. 4. 1. 4. 1. 9. 4. 9.]
>  [ 1. 9. 4. 1. 4. 0. 4. 4. 9. 4. 1. 1. 4. 1.]
>  [ 1. 4. 9. 4. 4. 4. 0. 1. 1. 1. 9. 1. 9. 4.]
>  [ 1. 4. 9. 4. 4. 4. 1. 0. 4. 1. 9. 4. 4. 1.]
>  [ 1. 1. 4. 9. 1. 9. 1. 4. 0. 4. 4. 4. 4. 9.]
>  [ 1. 1. 4. 1. 4. 4. 1. 1. 4. 0. 9. 4. 9. 4.]
>  [ 1. 4. 1. 4. 1. 1. 9. 9. 4. 9. 0. 4. 1. 4.]
>  [ 1. 9. 4. 1. 9. 1. 1. 4. 4. 4. 4. 0. 4. 1.]
>  [ 1. 4. 1. 4. 4. 4. 9. 4. 4. 9. 1. 4. 0. 1.]
>  [ 1. 9. 4. 1. 9. 1. 4. 1. 9. 4. 4. 1. 1. 0.]]
>
> For this matrix, I'm getting this with numpy:
>
> 2774532095.9999971
>
> But I have a feeling I'm exceeding the capacity of floats here.
It's not that you're exceeding the capacity of float64 numbers; it's just that floating point calculations are taking place. The determinant is calculated by doing an LU decomposition and then multiplying down the diagonal. Although all of your entries started as integers, floating point error does accumulate. The answer that you got is within finfo(float64).eps relative error of the actual answer.

> Does anyone have an idea for how to treat this? Is it absurd to think I
> could get a determinant of this matrix? Is there a python package that
> could help me?

If all of your matrices are going to be integers, doing the determinant-by-minors calculation yourself is probably easy enough to code and will retain complete precision:

http://mathworld.wolfram.com/DeterminantExpansionbyMinors.html

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
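[Editor's sketch, not part of the original thread.] To make the round-off point concrete, here is a small sketch that rebuilds the matrix from the post as plain Python integers, computes an exact determinant with fraction-free (Bareiss) elimination, and compares it against numpy's float64 result. The Bareiss routine is an illustration chosen only because it stays in exact integer arithmetic and is fast enough for a matrix this size; it is not what numpy does internally (numpy uses the floating-point LU route described above), and `bareiss_det` is a name invented here. The relative error printed at the end should come out on the order of finfo(float64).eps.

    import numpy as np

    def bareiss_det(matrix):
        """Exact determinant of a square integer matrix via fraction-free
        (Bareiss) elimination; every intermediate value stays an integer."""
        a = [row[:] for row in matrix]   # work on a copy
        n = len(a)
        sign = 1
        prev = 1                         # previous pivot (divides exactly)
        for k in range(n - 1):
            if a[k][k] == 0:
                # find a nonzero pivot below and swap rows
                for i in range(k + 1, n):
                    if a[i][k] != 0:
                        a[k], a[i] = a[i], a[k]
                        sign = -sign
                        break
                else:
                    return 0             # whole column is zero -> singular
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    # the division by the previous pivot is exact by construction
                    a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            prev = a[k][k]
        return sign * a[n - 1][n - 1]

    # The matrix from the post, as exact integers.
    M = [
        [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 0, 1, 4, 1, 9, 4, 4, 1, 1, 4, 9, 4, 9],
        [1, 1, 0, 1, 4, 4, 9, 9, 4, 4, 1, 4, 1, 4],
        [1, 4, 1, 0, 9, 1, 4, 4, 9, 1, 4, 1, 4, 1],
        [1, 1, 4, 9, 0, 4, 4, 4, 1, 4, 1, 9, 4, 9],
        [1, 9, 4, 1, 4, 0, 4, 4, 9, 4, 1, 1, 4, 1],
        [1, 4, 9, 4, 4, 4, 0, 1, 1, 1, 9, 1, 9, 4],
        [1, 4, 9, 4, 4, 4, 1, 0, 4, 1, 9, 4, 4, 1],
        [1, 1, 4, 9, 1, 9, 1, 4, 0, 4, 4, 4, 4, 9],
        [1, 1, 4, 1, 4, 4, 1, 1, 4, 0, 9, 4, 9, 4],
        [1, 4, 1, 4, 1, 1, 9, 9, 4, 9, 0, 4, 1, 4],
        [1, 9, 4, 1, 9, 1, 1, 4, 4, 4, 4, 0, 4, 1],
        [1, 4, 1, 4, 4, 4, 9, 4, 4, 9, 1, 4, 0, 1],
        [1, 9, 4, 1, 9, 1, 4, 1, 9, 4, 4, 1, 1, 0],
    ]

    exact = bareiss_det(M)
    approx = np.linalg.det(np.array(M, dtype=np.float64))
    print("exact integer det:", exact)
    print("numpy float64 det:", repr(approx))
    print("relative error:   ", abs(approx - exact) / abs(exact))
    print("float64 eps:      ", np.finfo(np.float64).eps)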
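And a minimal sketch of the determinant-expansion-by-minors approach from the MathWorld link (again an editor's illustration; `det_by_minors` is a made-up name, not a library function). Done in plain Python integers it is exact, but the naive recursion costs O(n!) operations, so it is only practical for small matrices; for a matrix the size of the one in the post, an exact elimination scheme like the sketch above is the more realistic route.

    def det_by_minors(m):
        """Exact determinant of a square integer matrix by cofactor (minor)
        expansion along the first row.  O(n!) -- fine for small matrices."""
        n = len(m)
        if n == 1:
            return m[0][0]
        total = 0
        for j in range(n):
            if m[0][j] == 0:
                continue                 # zero entries contribute nothing
            # minor: delete row 0 and column j
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * m[0][j] * det_by_minors(minor)
        return total

    # small worked example with an obvious answer: det(diag(2, 3, 4)) == 24
    print(det_by_minors([[2, 0, 0],
                         [0, 3, 0],
                         [0, 0, 4]]))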