walterddr commented on a change in pull request #9732: [FLINK-14153][ml] Add to BLAS a method that performs DenseMatrix and SparseVector multiplication.
URL: https://github.com/apache/flink/pull/9732#discussion_r336805240
 
 

 ##########
 File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/linalg/BLAS.java
 ##########
 @@ -131,19 +168,56 @@ public static void gemm(double alpha, DenseMatrix matA, boolean transA, DenseMat
 	}
 
 	/**
-	 * y := alpha * A * x + beta * y .
+	 * Check the compatibility of matrix and vector sizes in <code>gemv</code>.
 	 */
-	public static void gemv(double alpha, DenseMatrix matA, boolean transA,
-							DenseVector x, double beta, DenseVector y) {
+	private static void gemvDimensionCheck(DenseMatrix matA, boolean transA, Vector x, Vector y) {
 		if (transA) {
-			assert (matA.numCols() == y.size() && matA.numRows() == x.size()) : "Matrix and vector size mismatched.";
+			Preconditions.checkArgument(matA.numCols() == y.size() && matA.numRows() == x.size(),
+				"Matrix and vector size mismatched.");
 		} else {
-			assert (matA.numRows() == y.size() && matA.numCols() == x.size()) : "Matrix and vector size mismatched.";
+			Preconditions.checkArgument(matA.numRows() == y.size() && matA.numCols() == x.size(),
+				"Matrix and vector size mismatched.");
 		}
+	}
+
+	/**
+	 * y := alpha * A * x + beta * y .
+	 */
+	public static void gemv(double alpha, DenseMatrix matA, boolean transA,
+							DenseVector x, double beta, DenseVector y) {
+		gemvDimensionCheck(matA, transA, x, y);
 		final int m = matA.numRows();
 		final int n = matA.numCols();
 		final int lda = matA.numRows();
 		final String ta = transA ? "T" : "N";
 		NATIVE_BLAS.dgemv(ta, m, n, alpha, matA.getData(), lda, x.getData(), 1, beta, y.getData(), 1);
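For reference, this is a plain-Java sketch of what the `dgemv` call in the patch computes (y := alpha * A * x + beta * y over a column-major matrix), including the same dimension check. The class and method names here are illustrative, not Flink's actual API:

```java
/**
 * Minimal sketch of y := alpha * A * x + beta * y for a column-major
 * dense matrix, mirroring the semantics of the BLAS dgemv call above.
 * Names and layout are illustrative, not Flink's actual classes.
 */
public class GemvSketch {

	// data is column-major: element (i, j) is data[j * numRows + i]
	static void gemv(double alpha, double[] data, int numRows, int numCols,
					boolean transA, double[] x, double beta, double[] y) {
		int xLen = transA ? numRows : numCols;
		int yLen = transA ? numCols : numRows;
		if (x.length != xLen || y.length != yLen) {
			throw new IllegalArgumentException("Matrix and vector size mismatched.");
		}
		for (int i = 0; i < yLen; i++) {
			double sum = 0.0;
			for (int k = 0; k < xLen; k++) {
				// transposed: walk down column i; otherwise walk across row i
				double a = transA ? data[i * numRows + k] : data[k * numRows + i];
				sum += a * x[k];
			}
			y[i] = alpha * sum + beta * y[i];
		}
	}

	public static void main(String[] args) {
		// A = [[1, 2], [3, 4]] stored column-major: {1, 3, 2, 4}
		double[] a = {1, 3, 2, 4};
		double[] x = {1, 1};
		double[] y = {0, 0};
		gemv(1.0, a, 2, 2, false, x, 0.0, y);
		System.out.println(y[0] + " " + y[1]); // 3.0 7.0
	}
}
```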
 Review comment:
   any reason why `DenseVector` uses `NATIVE_BLAS` while the `SparseVector` uses `F2J_BLAS`?
   
   I think that, at least within a given BLAS level (0, 1, 2, 3, or higher), we should use only one specific BLAS implementation unless a specific reason comes up (IMO it would have to be a very strong justification)
   
   FYI: I am not sure whether this is related, but some suggestions on [stack](https://stackoverflow.com/questions/41825022/why-spark-blas-use-f2jblas-instead-of-native-blas-for-level-1-routines) indicate that there are performance considerations arising from recent developments in the JIT compiler
