> if you have questions/comments! I will be attending
> Spark Summit East, and can meet in person to discuss any details.
>
> -regards,
> Rajesh
>
>
> - Forwarded by Randy Swanberg/Austin/IBM on 01/21/2016 09:31 PM -
>
> From: "Ulanov, Alexander"
>> The default NVBLAS_TILE_DIM of 2048 is too big for my graphics card/matrix size. I
>> handpicked other values that worked. As a result, netlib+nvblas is on par
>> with BIDMat-cuda. As promised, I am going to post a how-to for nvblas
>> configuration.
>>
>>
>> https://docs.google.com/spreadsheets/d/1lWdVSuSragOobb0A_oeouQgHUMx378T9J5r7kwKSPkY/edit?usp=sharing
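
To sanity-check an nvblas setup along these lines before the full how-to appears, a minimal sketch is below. It assumes netlib-java (com.github.fommil.netlib) is on the classpath and that its native BLAS ultimately resolves to nvblas, whether through a cblas wrapper as described further down in the thread or an LD_PRELOAD of libnvblas; the object name and matrix size are placeholders, and NVBLAS_TILE_DIM itself is tuned in nvblas.conf, not in code:

import com.github.fommil.netlib.BLAS
import scala.util.Random

object NvblasSmokeTest {
  def main(args: Array[String]): Unit = {
    // Which backend did netlib-java pick? With nvblas interposed this still
    // reports a native BLAS class; nvblas sits underneath it.
    println("BLAS implementation: " + BLAS.getInstance().getClass.getName)

    val n = 4096  // large enough that nvblas should offload the call to the GPU
    val a = Array.fill(n * n)(Random.nextDouble())
    val b = Array.fill(n * n)(Random.nextDouble())
    val c = new Array[Double](n * n)

    // Column-major DGEMM: C = 1.0 * A * B + 0.0 * C
    BLAS.getInstance().dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n)
    println("c(0) = " + c(0))
  }
}

The easiest confirmation that the GPU actually picked up the call is to watch nvidia-smi (or the nvblas log, if logging is enabled in nvblas.conf) while it runs.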
Anyway
on the CPU, not OpenBLAS).
On an even deeper level, using natives has consequences for JIT and GC, which isn't
suitable for everybody, and we'd really like people to go into that
with their eyes wide open.
On 26 Mar 2015 07:43, "Sam Halliday" wrote:
> I'm not at a
>> with BIDMat-cuda. As promised, I am going to post a how-to for nvblas
>> configuration.
>>
>>
>> https://docs.google.com/spreadsheets/d/1lWdVSuSragOobb0A_oeouQgHUMx378T9J5r7kwKSPkY/edit?usp=sharing
>>
>>
>>
>> -Original Message-
>> From: Ulanov, Alexander
>> Sent: Wednesday, March 25, 2015 2:31 PM
>
> until then?
>
> On Wed, Mar 25, 2015 at 3:07 PM, Sam Halliday
> wrote:
>
>> Yeah, MultiBLAS... it is dynamic.
>>
>> Except, I haven't written it yet :-P
>> On 25 Mar 2015 22:06, "Ulanov, Alexander"
>> wrote:
>>
>>> Netlib kno
> switch at
> runtime by providing another library. Sam, please suggest if there is
> another way.
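
To make the runtime-switching point concrete, here is a small sketch of the JVM side of it; the object name is arbitrary and it only assumes netlib-java (com.github.fommil.netlib) on the classpath, which is what MLlib already uses:

import com.github.fommil.netlib.BLAS

object WhichBlas {
  def main(args: Array[String]): Unit = {
    // netlib-java resolves its backend lazily, on first use. It can be forced
    // with a system property, e.g.
    //   -Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeSystemBLAS
    // NativeSystemBLAS binds to whatever the system BLAS (libblas.so.3 on
    // Linux) resolves to, so pointing that shared library at OpenBLAS, ATLAS,
    // or a cblas front-end over nvblas changes the implementation without
    // recompiling anything.
    println(BLAS.getInstance().getClass.getName)
  }
}

With the native system backend, "providing another library" then comes down to what libblas.so.3 points at when the JVM loads it; nothing in Spark itself has to change.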
>
>
>
> *From:* Dmitriy Lyubimov [mailto:dlie...@gmail.com]
> *Sent:* Wednesday, March 25, 2015 2:55 PM
> *To:* Ulanov, Alexander
> *Cc:* Sam Halliday; dev@spark.apache.org; Xiangrui Meng
>> cblas shared library to use nvblas through netlib-java. Fedora does
>> not have cblas (but Debian and Ubuntu do), so I needed to compile it. I
>> could not use the cblas from ATLAS or OpenBLAS because they link to their
>> own implementation and not to Fortran BLAS.
>>
>> Be
> https://docs.google.com/spreadsheets/d/1lWdVSuSragOobb0A_oeouQgHUMx378T9J5r7kwKSPkY/edit?usp=sharing
>
> Best regards, Alexander
>
> -Original Message-
> From: Sam Halliday [mailto:sam.halli...@gmail.com]
> Sent: Tuesday, March 03, 2015 1:54 PM
> To: Xiangrui Meng; Joseph Bradley
> Cc: Evan R. Sparks; Ulanov, Alexander;
>>>> It seems like this is going to be somewhat exploratory for a while (and
>>>> there's probably only a handful of us who really care about fast linear
>>>> algebra!)
>>>>
>>>> - Evan
>>>>
>>>> On Mon, Feb 9, 2015 at
> cublas
> experiments? You can tell it by watching CPU/GPU usage.
>
> Best,
> Xiangrui
>
> On Thu, Feb 26, 2015 at 10:47 PM, Sam Halliday
> wrote:
> > Don't use "big O" estimates; always measure. It used to work back in the
> > days when double multiplication
2 seconds.
> The CPU->GPU->CPU version finished in 2.2 seconds.
> The GPU version finished in 1.7 seconds.
>
> I'm not sure whether my CPU->GPU->CPU code simulates the netlib-cublas
> path. But based on the result, the data copying overhead is definitely
> not as big
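
In the same "always measure" spirit, here is a throwaway timing harness for square DGEMM through netlib-java; the sizes, repetition count, and names are arbitrary, and it is not the code behind the numbers above:

import com.github.fommil.netlib.BLAS
import scala.util.Random

object GemmTiming {
  // Average seconds per n x n DGEMM, with one untimed call to warm up the JIT
  // and the native library loader.
  def secondsPerGemm(n: Int, reps: Int): Double = {
    val a = Array.fill(n * n)(Random.nextDouble())
    val b = Array.fill(n * n)(Random.nextDouble())
    val c = new Array[Double](n * n)
    val blas = BLAS.getInstance()
    blas.dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n)
    val start = System.nanoTime()
    var i = 0
    while (i < reps) {
      blas.dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n)
      i += 1
    }
    (System.nanoTime() - start) / 1e9 / reps
  }

  def main(args: Array[String]): Unit = {
    // n = 4096 allocates roughly 400 MB across the three arrays; raise -Xmx if needed.
    for (n <- Seq(1024, 2048, 4096))
      println(s"n=$n: ${secondsPerGemm(n, 3)} s per dgemm")
  }
}

Running it unchanged against f2j, an optimized CPU BLAS, and an nvblas-backed BLAS gives per-call numbers that can be compared the same way as the ones above.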
> copy the
> result back from the GPU only when needed. However, to be sure, I am going to ask
> the developer of BIDMat at his upcoming talk.
>
>
>
> Best regards, Alexander
>
>
> From: Sam Halliday [mailto:sam.halli...@gmail.com]
> Sent: Thursday, February 26, 2015 1:56
> >>> It seems like this is going to be somewhat exploratory for a while (and
> >>> there's probably only a handful of us who really care about fast linear
> >>> algebra!)
> >>>
> >>> - Evan
> >>>
> >>> On Mon, Feb 9, 2015 at 4:48 PM, Ulanov, Alexander
> >>> ticket? (Here's one: https://issues.apache.org/jira/browse/SPARK-5705)
> >>>
> >>> It seems like this is going to be somewhat exploratory for a while (and
> >>> there's probably only a handful of us who really care about fast linear
>