This is exactly what I am using; it is a wrapper around LinkedHashMap
that someone else developed here:
http://www.source-code.biz/snippets/java/6.htm

I think I also used an LRU implementation put out by Sun in my previous versions.
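
For reference, the general idea is the removeEldestEntry() trick Christian
describes below. A minimal sketch of such a wrapper (my own class name and
comments, not the exact code from that link) looks roughly like this:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache sketch: constructing the LinkedHashMap with
    // accessOrder = true keeps entries in least-recently-used order, and
    // removeEldestEntry() drops the oldest entry once maxEntries is exceeded.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {

        private final int maxEntries;

        public LruCache(int maxEntries) {
            // initial capacity, load factor, accessOrder = true (LRU ordering)
            super(16, 0.75f, true);
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Returning true tells LinkedHashMap to evict the eldest entry.
            return size() > maxEntries;
        }
    }

Used the way Daniel describes further down (keying on the SQL for a report),
it would be something like a field in the user's ASO, e.g.
new LruCache<String, ReportData>(5), checked with get(sql) before running the
query and filled with put(sql, data) afterwards; ReportData is just a
placeholder here for whatever the report structure is. Note that
LinkedHashMap is not synchronized, so if the ASO can be hit by concurrent
requests it should be wrapped with Collections.synchronizedMap().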

On Nov 30, 2007 9:13 AM, Christian Edward Gruber
<[EMAIL PROTECTED]> wrote:
> If you just override removeEldestEntry() on LinkedHashMap, you'll get
> LRU behaviour.  You can subclass it, set a max capacity, and evict the
> least recently used entry once you've hit that capacity.  It's not a
> heavy implementation, but it might remove a dependency if you're only
> importing commons-collections for the LRU.
>
> christian.
>
>
> On 30-Nov-07, at 8:34 AM, Daniel Jue wrote:
>
> > I have a primitive caching implementation that I like to think of as a
> > "conversation", even though it is not comparable to the kinds of
> > persistence methods you guys are talking about.
> >
> > Mine involves storing large computed data in the User's ASO (such as a
> > report data structure with its data).  Inside the User's ASO, I have
> > an LRU HashMap (I copied one off the net) that acts as the cache.
> > I use very unsophisticated but unique keys: the SQL used to generate
> > that specific report data.  The LRU has an upper limit (say 5), so for
> > the 6th report run by that user it simply forgets the least recently
> > used one.  I have a common page to display all reports, and that page
> > checks the User's LRU cache before trying to execute a 30-second SQL
> > request.
> > Before the report page is called, I copy some data from the user's
> > selections into the new page instance.  That data uses regular
> > @Persist.  From that data, the page can calculate the key (the SQL)
> > and then check for cached data in OnSetup.  So this works for up to
> > (n=5) windows, and it's snappy if the same report has been recently
> > run by the user.
> > Wow, it sounds a lot more complicated-but-elementary when I write it
> > out.
> >
> > Come to think of it, I should just @Persist the key value, which is
> > smaller than all of the settings data used to construct the key.
> >
> > PS-- if you see something very wrong about this picture, please let
> > me know!
> >
> > Daniel Jue
> >
> >
> > On Nov 29, 2007 8:01 PM, Kalle Korhonen <[EMAIL PROTECTED]>
> > wrote:
> >> Of course, nothing prevents one from writing a semi-automatic
> >> workspace management layer on top of Seam that would take care of
> >> detecting and closing abandoned conversations (for example, along the
> >> lines I suggested on the Trails list). The Seam guys have carefully
> >> removed any dependencies on JSF. In practice, integrating Tap5 with
> >> Seam might be the fastest way of getting practical results for a
> >> conversational scope, and it would solve not just one but two problems
> >> at the same time (conversations and session-per-conversation), of
> >> course at the expense of tying the implementation more closely to
> >> Hibernate, or at least JPA, but that's probably what the majority is
> >> using anyway. I'm sure the Seam guys would love to see Tapestry
> >> support for Seam. And by the way, the Wicket guys have already done
> >> this. Given that you, Geoff, are probably inclined to use a J2EE
> >> container anyway, wouldn't it make sense for you to start looking at
> >> creating a tapestry-seam integration project? It might be an
> >> interesting project for me to take on as well.
> >>
> >> Kalle
> >>
> >>
> >>
> >> On 11/28/07, Kalle Korhonen <[EMAIL PROTECTED]> wrote:
> >>>
> >>> I completely agree with Geoff that good-enough generic support for
> >>> conversations could make developing web applications much easier, and
> >>> it's one of the remaining big issues that web frameworks typically
> >>> don't offer an out-of-the-box solution for. Seam has a solution that
> >>> works well for typical enterprise apps that may have a high amount of
> >>> interaction with the database but don't have a huge number of users.
> >>> Because Seam ignores the problem of closing abandoned conversations,
> >>> it can quickly lead to much higher memory consumption, as open
> >>> conversations generally occupy memory until they are explicitly
> >>> closed or the session expires.
> >>>
> >>> There have been various attempts at solving conversation support for
> >>> Tapestry, and we are planning on supporting conversations in Trails
> >>> with a tighter memory management model for better scalability. I've
> >>> written some notes on session-per-conversation at
> >>> http://archive.trails.codehaus.org/users/[EMAIL PROTECTED] that are
> >>> relevant for this discussion as well. For Tap5, you can of course
> >>> come up with your own solution, but it would be great if the
> >>> framework had generic support for conversations that worked well
> >>> enough in the most common cases out-of-the-box and could be extended.
> >>>
> >>> Kalle
> >>>
> >>>
> >>> On 11/28/07, Thiago HP <[EMAIL PROTECTED]> wrote:
> >>>>
> >>>> On 11/28/07, Francois Armand <[EMAIL PROTECTED]> wrote:
> >>>>>
> >>>>> I completely agree with your remarks, and it's something of a pity
> >>>>> that T5 is so far ahead in so many areas, yet at the same time you
> >>>>> have to deal with this by hand.
> >>>>
> >>>>
> >>>> Let's not forget that Tapestry 5 is still alpha and there are other
> >>>> areas needing work too, AJAX being one of the most anticipated. In
> >>>> addition, it has a very flexible architecture that allows developers
> >>>> (Howard, the other T5 committers, you, or me) to implement any
> >>>> missing feature. ;)
> >>>>
> >>>> Thiago
> >>>>
> >>>
> >>>
> >>
> >
>
