A client library might be another option; I haven't looked into it. Off the top of my head, I doubt Cascalog's code distribution for the map/reduce jobs would work with any existing client. This is akin to deploying new code to run on the cluster, in this case via a REPL, so I don't think any existing API is going to support it. You could certainly connect a client that runs Pig jobs or something, but that's a lot less powerful than being able to distribute just-defined Clojure functions out to the cluster as map operations on the fly :-)
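To make that concrete, here's a rough sketch of the kind of thing I mean, in the style of the classic Cascalog word count. It assumes the Cascalog 1.x-style API and a made-up HDFS path, so treat the details loosely; the interesting bit is that split-words is defined live in the REPL and gets shipped out to the mappers when the query runs:

(use 'cascalog.api)
(require '[cascalog.ops :as c]
         '[clojure.string :as str])

;; Defined on the fly in the REPL session; Cascalog serialises it out
;; to the cluster as part of the job.
(defmapcatop split-words [line]
  (str/split line #"\s+"))

;; Word count over a text file on HDFS, results printed back into the REPL.
(?<- (stdout)
     [?word ?count]
     ((hfs-textline "hdfs:///data/some-logs") ?line)
     (split-words ?line :> ?word)
     (c/count ?count))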
Sort of like the difference between being able to connect to Postgres via ODBC and run SQL, and spinning up a REPL inside of Postgres that can dynamically define functions that play with the raw data.

On Wednesday, September 3, 2014 12:08:52 AM UTC+10, Jony Hudson wrote:
>
> On Tuesday, 2 September 2014 01:36:49 UTC+1, Beau Fabry wrote:
>
>> Just a little bit of showing off of the previous post :-)
>> http://i.imgur.com/zpfP9Ja.png
>>
>
> Nice! Would love to hear more about how you use it. I've only tinkered
> with Hadoop locally, so I'm very fuzzy on the concepts - you need to run
> Gorilla in-process on one of the hadoop nodes, rather than connecting to
> hadoop with a "client" library?
>
>
> Jony