Hi All
I've been solving the following optimisation problem in Sage and was
wondering if there's a more efficient method than my current one.
I have a large matrix A and want to solve xA=B, where we can assume
B=(1,0,0,...). The matrix has quite large nullity, and I want the
solution whose l1 norm is as small as possible (that is, I want to
minimise sum_i |x_i|).
My current method is to write x_i = x_i^+ - x_i^-, so that the norm is
sum_i (x_i^+ + x_i^-). The problem thus becomes a linear programming
one whose number of variables is twice the number of rows of A, all of
them non-negative, with a lot of equality constraints.
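For reference, this split formulation can be sketched with scipy's linprog; the small A and B below are made up for illustration, and xA = B is written in the equivalent column form A.T @ x = b:

```python
import numpy as np
from scipy.optimize import linprog

# A small stand-in for the large matrix A (made up for illustration);
# x A = B is solved in the equivalent column form A.T @ x = b.
A = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [2, -1, 1, 0]])
b = np.array([1.0, 0, 0, 0])          # B = (1, 0, 0, ...)

m = A.shape[0]                        # x has one entry per row of A
c = np.ones(2 * m)                    # objective: sum(x_plus + x_minus)
A_eq = np.hstack([A.T, -A.T])         # A.T @ (x_plus - x_minus) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x = res.x[:m] - res.x[m:]             # recover x = x_plus - x_minus
print(res.fun)                        # minimal l1 norm
```

As described, the LP has 2m non-negative variables and one equality constraint per column of A.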
It seems to me that this produces an unnecessarily big problem, and
that it would be simpler to use Sage to find one solution of the
equation together with the nullspace of the matrix, and then minimise
the norm over all vectors of the form (solution + member of the
nullspace). However, the norm of the solution plus a linear combination
of null vectors is no longer a nice function of the coefficients, so I
can't see how to split up my variables to give a linear objective.
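To make that concrete, here is a sketch (again with a made-up small A and B) of computing one particular solution and a nullspace basis with numpy/scipy; every x0 + N @ c solves the equation, but the objective ||x0 + N @ c||_1 is convex and piecewise linear in c rather than linear, which is exactly the difficulty described:

```python
import numpy as np
from scipy.linalg import null_space

# Made-up small A and B for illustration; we solve A.T @ x = b,
# the column form of x A = B.
A = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [2, -1, 1, 0]])
b = np.array([1.0, 0, 0, 0])

x0, *_ = np.linalg.lstsq(A.T, b, rcond=None)   # one particular solution
N = null_space(A.T)                            # orthonormal nullspace basis
print(N.shape[1])                              # nullity of A.T

# Any x0 + N @ c solves the system; the objective ||x0 + N @ c||_1
# is convex and piecewise linear in c, but not linear.
c = np.array([0.7, -1.3])
print(np.linalg.norm(A.T @ (x0 + N @ c) - b))  # ~0: still a solution
```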
I looked at using cvxopt to solve a convex optimisation problem but it
looks like cvxopt requires the objective function to be differentiable
everywhere, which the l1 norm isn't.
Are there any packages that provide the functionality I'm looking for?
If not, can my current method be improved upon?
Many thanks
Alastair Irving
--
To post to this group, send email to sage-support@googlegroups.com