With web applications I don't recommend having the DataContext stored
in the user's session. I think you are better off having a new
DataContext created for each request. This will enable your
DataContext to be GC'd after each request.
My experience is that creating DataContexts is very cheap. I
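The per-request lifecycle described above can be sketched with a stub. The DataContext here is a simplified stand-in for org.apache.cayenne.access.DataContext (not the real API); the point is the scope: the context is created inside the request handler and becomes eligible for GC as soon as the request returns, instead of accumulating state in the session.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-request DataContext lifecycle described above.
// "DataContext" is a simplified stand-in for the real Cayenne class;
// the point is the scope, not the API.
public class PerRequestContextDemo {

    static class DataContext {
        // uncommitted object graph that would otherwise pile up in the session
        final Map<String, Object> registered = new HashMap<>();
    }

    // One fresh context per request; nothing is stored in the session,
    // so the whole graph is eligible for GC when the request returns.
    static String handleRequest(String input) {
        DataContext context = new DataContext();
        context.registered.put("row", input);
        return "handled:" + input;
        // context goes out of scope here
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("a"));
        System.out.println(handleRequest("b"));
    }
}
```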
Then I don't think these are viable options, my dev server uses Java
1.5.
What I was hoping for is sort of a simple how-to on best practices
when cleaning up after a large query. Especially when there are many
sessions anticipated.
On Aug 13, 2009, at 4:45 PM, Michael Gentry wrote:
Hey guys--
I'm seeing a problem with inconsistent object data across contexts using a
shared cache and am hoping someone can shed some light for me.
I've got a situation where two threads (each with its own context) wind up
interacting with the same persistent object.
The first thread updates the
FWIW, jmap -dump is only on Java 1.6, not 1.5.
mrg
On Thu, Aug 13, 2009 at 3:29 PM, Tore Halset wrote:
> Hello.
>
> It is hard to tell where the memory problems are without looking at the
> actual used memory. I normally use jmap to dump memory info and then jhat on
> a different computer to analyze the dump.
Hello.
It is hard to tell where the memory problems are without looking at
the actual used memory. I normally use jmap to dump memory info and
then jhat on a different computer to analyze the dump.
jmap -dump:live,file=filename pid
jhat -J-Xmx10G filename
Depending on your heap size, jhat
Background:
I have been attempting to do as much performance tuning as I can given
the visibility of the middleware I am using, but am running into
severe "out of memory" errors with Tomcat on my production server. My
current theory is that I may have missed something concerning how to
p
Hi Andrus,
Andrus Adamchik schrieb:
On Aug 13, 2009, at 12:36 PM, Andreas Hartmann wrote:
Of course I could commit the transaction each 1000 rows or so, but I'd
rather commit the whole spreadsheet to the DB in a single transaction.
You can use user-defined transaction scope, then committi
This is not fully on topic (see my other email in this thread which
was more to the point), still... there's one cool feature that I've
discussed with somebody offline some time ago. Often you need to work
offline and then synchronize the data saved offline back to the server
when the clien
On Aug 6, 2009, at 12:49 AM, Michael Alderton-Smith wrote:
Simple question at the end of the day: can you use the lighter client
classes locally somehow?
Yes. Try using org.apache.cayenne.remote.service.LocalConnection for
your local work. It still incurs the overhead of two layers of
Actually DbAdapter does not abstract all JDBC interaction. You will
have to re-implement the DataNode.performQuery() method at a minimum.
Andrus
On Aug 7, 2009, at 11:46 PM, Mike Kienenberger wrote:
No non-jdbc examples that I know of, but I'd say you needed to create
your own implementati
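The advice above can be pictured with simplified stand-ins. These are not the real Cayenne DataNode/DbAdapter signatures, only the shape of the override: a non-JDBC backend plugs in by replacing the node's query-execution entry point.

```java
// Simplified stand-ins for the idea above: a non-JDBC backend replaces
// the node's query-execution entry point. The real DataNode.performQuery()
// signature differs; this only shows the shape of the override.
public class NonJdbcNodeDemo {

    static class DataNode {
        String performQuery(String query) {
            return "jdbc:" + query;           // default path talks JDBC
        }
    }

    static class NonJdbcDataNode extends DataNode {
        @Override
        String performQuery(String query) {
            return "custom-store:" + query;   // route to the non-JDBC store
        }
    }

    public static void main(String[] args) {
        DataNode node = new NonJdbcDataNode();
        System.out.println(node.performQuery("SELECT name FROM artist"));
    }
}
```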
On Aug 13, 2009, at 12:36 PM, Andreas Hartmann wrote:
Of course I could commit the transaction each 1000 rows or so, but
I'd rather commit the whole spreadsheet to the DB in a single
transaction.
You can use user-defined transaction scope, then committing every 1000
rows will allow Jav
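A stubbed sketch of the user-defined transaction scope idea above. Transaction and flush() stand in for the real Cayenne/JDBC calls (the actual API differs); the shape is the point: flush the context every N rows so Java can reclaim the committed objects, while the surrounding database transaction commits exactly once at the end.

```java
import java.util.ArrayList;
import java.util.List;

// Stubbed sketch of the "user-defined transaction scope" idea above.
// Transaction and flush() stand in for the real Cayenne/JDBC calls; the
// shape is the point: flush every N rows so Java can reclaim committed
// objects, while the database transaction commits exactly once at the end.
public class BatchedImportDemo {

    static class Transaction {
        int flushes = 0;
        boolean committed = false;

        void flush(List<String> pending) {  // stand-in for context.commitChanges()
            flushes++;
            pending.clear();                // flushed rows can now be GC'd
        }

        void commit() {                     // stand-in for the real tx.commit()
            committed = true;
        }
    }

    static Transaction importRows(List<String> rows, int batchSize) {
        Transaction tx = new Transaction();
        List<String> pending = new ArrayList<>();
        for (String row : rows) {
            pending.add(row);
            if (pending.size() == batchSize) {
                tx.flush(pending);
            }
        }
        if (!pending.isEmpty()) {
            tx.flush(pending);              // final partial batch
        }
        tx.commit();                        // one database commit for the whole import
        return tx;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            rows.add("row" + i);
        }
        Transaction tx = importRows(rows, 1000);
        System.out.println(tx.flushes + " flushes, committed=" + tx.committed);
        // 2500 rows with batchSize 1000 -> 3 flushes, committed=true
    }
}
```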
Hi everyone,
I'm facing the following situation:
I'm importing arbitrary spreadsheets with quite large numbers of rows. A
row represents a recipient of a mailing. The spreadsheet can contain
arbitrary columns, so I chose the following entity model:
* RecipientSet
-> m Fields (i.e. spreadshe