Very cool stuff!

I notice that you are specialising the RBM to a specific matrix 
implementation (Clatrix / JBlas) in the file "jblas.clj". Are you sure you 
need to do that? Part of the beauty of core.matrix is that you should be 
able to write your algorithms in an implementation-independent manner and 
still get the performance benefits of the optimised implementation when you 
need it.

For example, the core.matrix protocols (mmul, add!, add, inner-product, 
transpose, etc.) should all dispatch to the right Clatrix implementation 
without any noticeable loss of performance (if they don't, that's an 
implementation issue in Clatrix... it would be good to unearth those!).
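
To make that concrete, here's a rough sketch of how one core RBM step could 
be written purely against the core.matrix API, with the optimised backend 
chosen separately. The namespace and function names (rbm.sketch, sigmoid, 
propagate-hidden) are made up for illustration, and it assumes Clatrix is on 
the classpath and registered under the :clatrix key:

(ns rbm.sketch
  (:require [clojure.core.matrix :as m]))

(defn sigmoid
  "Element-wise logistic sigmoid, written only against core.matrix."
  [x]
  (m/emap #(/ 1.0 (+ 1.0 (Math/exp (- %)))) x))

(defn propagate-hidden
  "Hidden-unit activations for one RBM up-pass: sigmoid(v . W + b)."
  [weights hidden-bias visible]
  (sigmoid (m/add (m/mmul visible weights) hidden-bias)))

(comment
  ;; Select the optimised Clatrix/JBlas backend; the functions above
  ;; stay exactly the same.
  (m/set-current-implementation :clatrix)
  (propagate-hidden (m/array [[0.1 0.2] [0.3 0.4]])  ; W, 2 visible x 2 hidden
                    (m/array [[0.0 0.0]])            ; hidden bias, 1 x 2
                    (m/array [[1.0 0.0]])))          ; visible batch, 1 x 2

Switching the backend is then a single set-current-implementation call (or a 
per-array choice), and the algorithm code never has to mention JBlas directly.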

If the core.matrix API is insufficient to implement what you need, then I'd 
love to get issues / PRs (either for core.matrix or Clatrix).

On Monday, 5 January 2015 07:07:11 UTC+8, Christian Weilbach wrote:
>
> Hi all, 
>
> From the README: 
>
> This library is supposed to implement Boltzmann Machines, Autoencoders 
> and related deep learning technologies. All implementations should have 
> both a clean, high-level mathematical implementation of their 
> algorithms (with core.matrix) and, where possible, an optimized and 
> benchmarked version of the core routines for production use. This is 
> meant to help new users or potential contributors learn, implement 
> algorithms from papers/other languages, and then tune them for 
> performance if needed. 
>
> This repository is supposed to cover techniques building on Restricted 
> Boltzmann Machines, like Deep Belief Networks, Deep Boltzmann Machines 
> or temporal extensions thereof, as well as Autoencoders (which I am not 
> familiar enough with yet). Classical back-propagation is also often 
> used to fine-tune deep models in a supervised fashion, so the networks 
> should support it as well. 
>
> I haven't built deep belief networks out of it myself yet, but this 
> should be fairly straightforward. Combining it with the usual linear 
> classifiers (logistic regression, SVM) at the top layer could also be 
> explored. If somebody has interest in or experience with implementing 
> standard backpropagation, go ahead and open a pull request :-). 
>
> Christian 
