When I create an uberjar with AOT compilation I am surprised to see ".clj"
files in there. Then when I run the jar with the "java -jar myuberjar"
command I get a ClassNotFoundException.
For example, I have the following dependency in my project.clj file:
[org.clojure/java.jdb
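(Not the thread's resolution - just general background: a Leiningen uberjar
normally contains both the .clj sources and the AOT-compiled .class files,
and running it with "java -jar" typically needs an AOT-compiled main
namespace declared in project.clj. A hypothetical sketch, names illustrative:)

(defproject myapp "0.1.0"
  :dependencies [[org.clojure/clojure "1.3.0"]
                 ;; ... plus the JDBC dependency mentioned above
                 ]
  :main myapp.core    ; this namespace must have (:gen-class) and a -main fn
  :aot [myapp.core])  ; or :aot :all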
Can someone help me write the following function:
I have two lists of maps as inputs:
(def xs [{:id 1 :val 10}
         {:id 2 :val 20}
         {:id 3 :val 30}
         {:id 4 :val 40}])
(def ys [{:id 2 :val 20}
         {:id 3 :val 30}
         {:id 5 :val 50}
Peter/Tim -
Also want to commend you on this amazing, highly performant library (deep-
freeze) that you have written.
Shoeb
On Jan 6, 3:05 am, Shoeb Bhinderwala
wrote:
> Thanks a lot Peter. Worked great! I did some rudimentary benchmarking
> for large data sets and found deep-freeze to
Thanks a lot Peter. Worked great! I did some rudimentary benchmarking
for large data sets and found deep-freeze to be 10 times faster on
average compared to JSON serialization. That is really a huge
performance difference.
On Jan 6, 2:19 am, Peter Taoussanis wrote:
> Oh wow, sorry- I didn't see
Hi Tim
I am using redis-clojure client: https://github.com/tavisrudd/redis-clojure
Below is the complete code listing. The thaw invocation gives me the
error:
java.lang.String cannot be cast to [B - (class
java.lang.ClassCastException)
-- code
(ns my-app
(:require [redis.c
Hi Peter -
I looked at deep-freeze but did not quite understand how to use it.
I used the following to freeze my complex Clojure data structure -
results (a map of lists of maps) - and persist it to redis:
(redis/hmset k k (deep-freeze/freeze-to-array results))
Then I tried to retrieve and thaw it as
pr-str or
> data.json/generate-string.
>
> You can then read it back using read-string or the equivalent json fn.
>
> Regards,
> BG
>
> On Wed, Jan 4, 2012 at 12:00 PM, Shoeb Bhinderwala
> wrote:
> > I am trying to use Redis a
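(A minimal round trip illustrating the pr-str / read-string suggestion
quoted above - not code from the thread; the value is just an example:)

user=> (def m1 [{"total" {:end_mv_base 721470021.02M}}])
#'user/m1
user=> (= m1 (read-string (pr-str m1)))
true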
I am trying to use Redis as a data structure cache for my Clojure
application. Does anybody have experience/code/ideas for writing/reading
a complex Clojure data structure to/from the Redis cache?
For example I have a list of maps as shown below:
(def m1
[{
"total" {:end_mv_base 721470021.02M, :r
I want to merge lists of maps. Each map entry has an :id and a :code
key. The :code associated with an :id from one map should take
precedence over the same :id entry from another map.
I have an implementation. The problem and solution are best described
with an example:
;priority 1 map
(def p1 [{:i
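(A hypothetical sketch of the merge being described - index each list by
:id and let the higher-priority list win on collisions; the names and
sample data are illustrative, not the thread's own solution:)

(defn merge-by-id [high low]
  (let [index #(into {} (map (juxt :id identity) %))]
    ;; merge lets the right-hand (higher-priority) map win per :id
    (vals (merge (index low) (index high)))))

(merge-by-id [{:id 1 :code "A"}]
             [{:id 1 :code "B"} {:id 2 :code "C"}])
;;=> ({:id 1 :code "A"} {:id 2 :code "C"})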
Is there a more elegant/idiomatic way to achieve the following result:
user=> a1
["a" "b" "c" "d"]
user=> (map-indexed (fn [n x] (vec (take (inc n) x)))
                    (take (count a1) (repeat a1)))
(["a"] ["a" "b"] ["a" "b" "c"] ["a" "b" "c" "d"])
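(One shorter possibility, not necessarily what the thread settled on:
reductions builds the prefixes directly, and subvec can slice the vector:)

user=> (rest (reductions conj [] a1))
(["a"] ["a" "b"] ["a" "b" "c"] ["a" "b" "c" "d"])
user=> (map #(subvec a1 0 %) (range 1 (inc (count a1))))
(["a"] ["a" "b"] ["a" "b" "c"] ["a" "b" "c" "d"])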
I agree. Thanks for general guidance on using parameterized queries. I
will switch to use prepared statements instead.
On Oct 22, 3:51 am, Alan Malloy wrote:
> Yep. Repeating you for emphasis, not repeating myself to disagree with
> you.
>
> On Oct 22, 12:37 am, Sean Corfield wrote:
')"
>
> Would be a way to do it. Interpose returns a lazy sequence so you need to
> apply str to realize the sequence.
>
> Luc P.
>
> On Fri, 21 Oct 2011 17:54:41 -0700 (PDT)
> Shoeb Bhinderwala wrote:
> > Hi
Hi
I wrote the following function to create a SQL IN clause from a list
of values. Essentially the function creates a single string which is a
comma-separated, quoted list of the values surrounded by parentheses.
user=> (def xs [1 2 3 4 5])
user=> (str "('" (first xs) (reduce #(str %1 "', '" %2) "
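(A sketch along the lines of Luc's interpose/str suggestion quoted above,
using clojure.string/join; as agreed further up in the thread, bind
parameters / prepared statements are still the safer route:)

(require '[clojure.string :as str])

(defn in-clause [xs]
  ;; builds "('1', '2', '3', '4', '5')" - for illustration only
  (str "('" (str/join "', '" xs) "')"))

user=> (in-clause [1 2 3 4 5])
"('1', '2', '3', '4', '5')"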
Thanks. Didn't think it would exist in clojure.core.
On Oct 4, 4:58 pm, Ulises wrote:
> your subject contains the answer :)
>
> sandbox> (def s1 (seq ["s1" (seq ["s2" "s3"]) "s4" "s5" (seq ["s6"
> (seq ["s7" "s8"]) "s9"])]))
> #'sandbox/s1
> sandbox> s1
> ("s1" ("s2" "s3") "s4" "s5" ("s6" ("s7" "
(def s1
  (seq
   ["s1"
    (seq ["s2" "s3"]) "s4" "s5"
    (seq ["s6" (seq ["s7" "s8"]) "s9"])]))
user=> s1
("s1" ("s2" "s3") "s4" "s5" ("s6" ("s7" "s8") "s9"))
How do I convert s1 to a flat sequence like this:
("s1" "s2" "s3" "s4" "s5" "s6" "s7" "s8" "s9")
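(As the reply further up points out, the answer is in the subject line -
clojure.core/flatten:)

user=> (flatten s1)
("s1" "s2" "s3" "s4" "s5" "s6" "s7" "s8" "s9")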
Many thanks for adding this feature. Without it the Clojure code would
have been left in the dust.
On Aug 9, 11:46 pm, Sean Corfield wrote:
> On Tue, Aug 9, 2011 at 9:39 AM, Shoeb Bhinderwala <
>
> shoeb.bhinderw...@gmail.com> wrote:
> > With these options added the Clojure
Hi Sean –
With these options added the Clojure code runs just about as fast as
Java. I set the fetch size to 1000 for both of them.
Average run times to load 69,000 records:
Java = 2.67 seconds
Clojure = 2.72 seconds
Thanks
Shoeb
On Aug 9, 12:54 am, Sean Corfield wrote:
> On Mon, Aug 8,
Sean/Stuart/Others -
My apologies to the group. I found out why my Clojure code runs slower
than Java.
The Java code uses the setFetchSize() method to retrieve data in
batches:
myResultSet.setFetchSize(1000);
myResultSet.setFetchDirection(ResultSet.FETCH_FORWARD);
Without
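(For reference, a rough Clojure-interop equivalent of the Java calls above -
conn and sql are placeholder names, not code from the thread:)

(with-open [stmt (.createStatement conn)]
  (.setFetchSize stmt 1000)        ; hint the driver to fetch 1000 rows per round trip
  (.setFetchDirection stmt java.sql.ResultSet/FETCH_FORWARD)
  (with-open [rs (.executeQuery stmt sql)]
    (doall (resultset-seq rs))))   ; realize the rows while the ResultSet is open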
You are right Michael. I misunderstood Colin's statement.
As Stuart suggested I am profiling the code and will share the results
with the group soon.
On Aug 8, 4:33 am, Michael Wood wrote:
> Hi Shoeb
>
> On 7 August 2011 01:51, Shoeb Bhinderwala wrote:
>
> > I am not g
I switched to clojure.java.jdbc. Found no difference at all. It is
still about 10 times slower than Java.
On Aug 6, 8:54 pm, Sean Corfield wrote:
> On Sat, Aug 6, 2011 at 4:51 PM, Shoeb Bhinderwala <
>
> shoeb.bhinderw...@gmail.com> wrote:
> > In one test case, I loaded 69,
would also time the cost of creating 10 clojure maps of
> a similar structure. Finally - 100,000 is big enough to give small heap-
> size worries; are the JVM settings the same?
>
> Sent from my iPad
>
> On 6 Aug 2011, at 19:11, Shoeb Bhinderwala
> wrote:
>
s is the cause
> of the problem.
> Sunil.
>
> On Sat, Aug 6, 2011 at 11:40 PM, Shoeb Bhinderwala <
> shoeb.bhinderw...@gmail.com> wrote:
> > Problem summary: I am running out of memory using pmap but the same code
> > works with re
I am loading about 100,000 records from the database with
clojure.contrib.sql, using a simple query that pulls in 25 attributes
(columns) per row. Most of the columns are of type NUMBER, so they get loaded
as BigDecimals. I am using an Oracle database and the JDBC 6 driver (
com.oracle/ojdbc6 "11.1.0.7
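(A hypothetical sketch of the kind of load being described, using
clojure.contrib.sql; the db-spec and query are illustrative, not taken
from the thread:)

(require '[clojure.contrib.sql :as sql])

(def db {:classname   "oracle.jdbc.OracleDriver"
         :subprotocol "oracle:thin"
         :subname     "@//dbhost:1521/SERVICE"
         :user        "scott"
         :password    "tiger"})

(defn load-rows []
  (sql/with-connection db
    (sql/with-query-results rows ["SELECT * FROM my_table"]
      (doall rows))))   ; realize all ~100,000 rows before the connection closes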
Problem summary: I am running out of memory using pmap but the same code
works with regular map function.
My problem is that I am trying to break my data into sets and process them
in parallel. My data is for an entire month and I am breaking it into 30/31
sets - one for each day. I run a function
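(A hypothetical sketch of the shape being described - split the month into
per-day groups and process them in parallel; :day and process-day are
illustrative names. Note that pmap keeps several chunks in flight at once,
so each day's data plus its result has to fit in memory at the same time:)

(defn process-month [rows process-day]
  (->> rows
       (group-by :day)      ; 30/31 groups, one per day
       vals
       (pmap process-day)   ; with plain map the memory pressure goes away
       doall))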
I set up Vim to use a Nailgun server. I start the Nailgun server using
the "lein vimclojure" plugin and use the VimClojure Vim plugin to connect
to it. Everything works great and I can start a REPL inside Vim using
the command :ClojureRepl. All of this is on Windows 7.
However, I want to start a REPL on
Fairly new to Clojure. When I was browsing a solution to one of the
problems in Project Euler, I came across a solution that used a
recursive var definition.
;By considering the terms in the Fibonacci sequence whose values do
not
;exceed four million, find the sum of the even-valued terms.
(def fi
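(The classic self-referential definition being referred to presumably looks
something like this - a sketch, not the exact code from the solution:)

(def fibs (lazy-cat [0 1] (map + fibs (rest fibs))))

;; sum of the even-valued terms not exceeding four million
(reduce + (filter even? (take-while #(<= % 4000000) fibs)))
;;=> 4613732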
Wrote the following function which did the trick:
(defn partn [data]
  (let [add-mapping (fn [m v]
                      ;; map each distinct :a1 value to the rows that carry it
                      (assoc m v (filter #(= v (:a1 %)) data)))]
    (reduce add-mapping {} (set (map #(:a1 %) data)))))
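(For what it's worth, clojure.core/group-by does essentially the same thing,
returning vectors instead of lazy seqs - not from the thread, just a shorter
equivalent:)

(defn partn [data]
  (group-by :a1 data))   ; map from each :a1 value to the rows that carry it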
On Sat, Apr 16, 2011 at 8:15 PM, shuaybi wrote:
> I am trying to write a functio
27 matches