@Cached only applies within a single page rendering cycle, so as far as I
know it would not help in your situation.
Source:
http://sqllyw.wordpress.com/2008/03/15/new-features-in-tapestry-5011/
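For reference, here is a minimal sketch of how @Cached is typically used; it
only avoids re-running a getter within one rendering of the page, and the
cached value is gone once the request ends. (The page class, DAO and entity
names below are just placeholders, not anything from your application.)

import java.util.List;

import org.apache.tapestry5.annotations.Cached;
import org.apache.tapestry5.ioc.annotations.Inject;

public class ReportPage
{
    @Inject
    private ReportDao reportDao; // hypothetical DAO service

    // The query runs at most once per rendering of this page;
    // the result is discarded when the request ends.
    @Cached
    public List<ReportRow> getRows()
    {
        return reportDao.findRows();
    }
}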
Your scenario could be implemented with a component of your own. The
component would represent a fragment: it would read the file (possibly with
an in-memory cache strategy using soft/weak references) and write the
output directly to the stream (or rather to the DOM tree of the document
being returned). Rolling your own solution lets you mimic the behavior you
describe; a rough sketch follows below.
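Untested, but something along these lines (the component name, parameter and
cache policy are my own invention; expiry/invalidation for when your cron job
rewrites a file is left out):

import java.io.IOException;
import java.lang.ref.SoftReference;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.tapestry5.MarkupWriter;
import org.apache.tapestry5.annotations.Parameter;

public class CachedFragment
{
    // Shared across requests; SoftReferences let the GC drop entries under memory pressure.
    private static final Map<String, SoftReference<String>> CACHE = new ConcurrentHashMap<>();

    @Parameter(required = true)
    private String path; // file system path of the pre-generated fragment

    void beginRender(MarkupWriter writer)
    {
        SoftReference<String> ref = CACHE.get(path);
        String html = (ref == null) ? null : ref.get();

        if (html == null)
        {
            try
            {
                html = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
            }
            catch (IOException e)
            {
                throw new RuntimeException("Could not read fragment " + path, e);
            }
            CACHE.put(path, new SoftReference<>(html));
        }

        // writeRaw() emits the fragment as-is, without escaping the markup.
        writer.writeRaw(html);
    }
}

In the template you would then use it like
<t:cachedfragment path="/var/cache/fragments/top100.html"/> (the path being
whatever your cron job writes).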
Another idea would be to write/cache only the data needed to render the
tables (i.e. cache the content, not the markup). Nevertheless, I doubt
whether such a solution (dynamically caching the results of database
queries in memory or on disk) is even necessary.
So, all in all, you might want to port your application. As always, use the
simplest solution first: plain database queries without any caches. Once you
actually see problems (performance drops below what is required), go for
optimization. Since you have a fallback solution at hand (cron jobs +
disk fragments), you are on the safe side. But I doubt you really need the
markup itself to be cached; caching the database results and recreating the
markup sounds more reasonable, and might save you a lot of disk-seek time.
But as always: only the code / the running application will tell you!
Cheers,
Martin (Kersten)
-----Original Message-----
From: Tobias Marx [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 18, 2008 17:45
To: Tapestry users
Subject: @Cached and caching in general
I have not used T5 yet, but would @Cached use the file system for caching
HTML fragments, similar to the caching mechanisms in some PHP frameworks?
Or is this a pure memory-based cache?
I am thinking about migrating an old PHP application to T5 - it has really
a lot of traffic and many users are logged in at the same time.
It is quite a low-level application that is still quite fast due to cron
jobs in the background that generate HTML fragments that are included to
reduce the database-query bottleneck (e.g. grouping/ordering and sorting of
huge tables).
Somehow I don't trust Hibernate for high-performance database queries on
huge tables... I think that if the tables are huge and many people access
them, it will always lead to problems, no matter how good the queries are
and how well you have split the data across several tables.
So I think the best solution is always to generate the slow HTML fragments
in the background and simply "include" them... this is even quicker than
parsing templates when the data is cached. You save the time needed to query
the database plus the time needed to process the templates involved.
Currently the setup for this application uses one-way database replication;
the cron jobs access the huge data table on the replicated database and
generate those HTML fragments without disturbing the web application's
performance. The main application then simply includes those HTML fragments
within milliseconds.
But maybe the T5 caching mechanism would make all of those low-level
tricks redundant?
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------