(defn persist-rows
  ;; headers is currently unused; transform-rows, with-db, *db*, and
  ;; insert-into-table are helpers defined elsewhere.
  [headers rows id]
  (let [mrows (transform-rows rows id)]
    (with-db *db*
      (apply insert-into-table
             :my-table
             [:col1 :col2 :col3]
             mrows))
    nil))

(defn filter-data
  ;; Persists only the rows whose :item_id matches item-id.
  [rows item-id header id]
  (persist-rows header
                (filter #(= (:item_id %) item-id) rows)
                id))

;; Persist each item's rows in parallel; items and id are assumed to be
;; parallel seqs, and dorun forces the lazy pmap result for side effects.
(dorun (pmap #(filter-data rows %1 header %2)
             items id))
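
As an aside, the filter pass above rescans the full rows seq once per
item. A minimal sketch that buckets the rows only once, assuming the
same helpers as above (group-by is clojure.core):

;; Group rows by :item_id up front, then persist each bucket in
;; parallel. Assumes items and id line up pairwise, as in the pmap
;; call above; an-id is one element of the id seq.
(let [grouped (group-by :item_id rows)]
  (dorun (pmap (fn [item-id an-id]
                 (persist-rows header (get grouped item-id) an-id))
               items id)))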

On Dec 16, 4:45 pm, Michael Ossareh <ossa...@gmail.com> wrote:
> On Thu, Dec 16, 2010 at 09:19, clj123 <ariela2...@gmail.com> wrote:
> > Hello,
>
> > I'm trying to insert a large number of records into a database, but
> > it isn't scaling well. Saving 100 records takes 10 seconds and
> > 1000000 records takes 2 minutes, but 2500000 records throws a Java
> > heap out-of-memory exception.
>
> > I've tried separating the record processing from the actual batch
> > save. Just processing the 2500000 records in memory takes 30
> > seconds; with the batch insert it throws the above exception. I
> > don't understand why saving to the database would need more Java
> > heap space.
>
> > Any ideas would be appreciated.
>
> What indexes are on the table that you're inserting into? To me the increase
> in time suggests your index is being rebuilt after each insert.
>
> As for the memory, I concur with zeph: you're either holding onto the
> head of a seq, or you're accessing some portion of a string that is
> keeping the backing data structures alive, and you're OOMing as a
> result.
>
> Code please :)
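
On the index point: committing the whole batch in a single transaction
lets the database maintain the index once per batch instead of once per
row. A sketch with clojure.contrib.sql, assuming db is your connection
spec and mrows the already-transformed rows:

(require '[clojure.contrib.sql :as sql])

;; One transaction around the whole batch; insert-values takes the
;; table, the column names, and then one vector per row.
(sql/with-connection db
  (sql/transaction
    (apply sql/insert-values :my-table
           [:col1 :col2 :col3]
           mrows)))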
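
And on the seq-head point: consuming the rows in fixed-size chunks via
doseq and partition-all keeps only one chunk realized at a time, so the
full 2500000 rows never need to fit on the heap at once, provided mrows
is a lazy seq whose head isn't retained elsewhere. Building on the
sketch above (the batch size of 10000 is arbitrary):

;; doseq does not retain the head of the seq, so each batch becomes
;; garbage as soon as it has been inserted.
(sql/with-connection db
  (doseq [batch (partition-all 10000 mrows)]
    (sql/transaction
      (apply sql/insert-values :my-table
             [:col1 :col2 :col3]
             batch))))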
