> But I don't see how I could avoid those objects ending up in the shared
> object cache anyway. Can I?
I know some users have tried turning off the shared cache in the Modeler at
the DataDomain level. This results in attaching a separate snapshot cache to
each ObjectContext individually. I haven't tried such a config myself [*], so
I can't comment on the results, but presumably it helps in certain scenarios.

Andrus

[*] As you may have noticed before, my typical config is a shared object
cache, a single shared ObjectContext with a local cache attached for read
operations, and multiple short-lived on-demand contexts for writes.

> On Dec 21, 2017, at 7:55 PM, Musall, Maik <m...@selbstdenker.ag> wrote:
>
> So far, I don't use query caches. This application grows to tens of GB of
> RAM from filling the shared object cache alone, I use short-lived
> ObjectContexts, and I don't really want or need another level of caching
> that I could forget to invalidate. Plus, I don't have that many explicit
> queries anyway. Users are navigating object graphs all the time.
>
> This one table is different. I _might_ use a query cache for that. It could
> be sensible in this case, and invalidation would also be trivial.
>
> But I don't see how I could avoid those objects ending up in the shared
> object cache anyway. Can I?
>
> Maik
>
>
>> On Dec 21, 2017, at 4:07 PM, John Huss <johnth...@gmail.com> wrote:
>>
>> It's going to depend on which cache you mean. The query cache can be
>> cleared by setting a cache group on the query that fetches the objects and
>> then removing that cache group later.
>>
>> The shared object cache can be cleared by finding the objects you want in
>> context.getGraphManager().registeredNodes() and then invalidating them one
>> by one. It would be better to use the query cache.
>>
>> On Thu, Dec 21, 2017 at 6:48 AM Musall, Maik <m...@selbstdenker.ag> wrote:
>>
>>> Hi Michael,
>>>
>>> How to deal with the caches is basically my actual question. Ideally, I'd
>>> like to call something like myentity.truncateTable() and just have all
>>> the data deleted and all caches purged by that, but of course that
>>> doesn't exist yet.
>>>
>>> Maik
>>>
>>>
>>>> On Dec 21, 2017, at 1:13 PM, Michael Gentry <blackn...@gmail.com> wrote:
>>>>
>>>> Hi Maik,
>>>>
>>>> Raw SQL would certainly be the most efficient way. Even if you didn't
>>>> use raw SQL, though, how were you planning on dealing with Cayenne's
>>>> caches? I think this issue exists regardless of how you truncate the
>>>> table. There are various options; I'm just trying to get a feel for your
>>>> use case and thoughts.
>>>>
>>>> Thanks,
>>>>
>>>> mrg
>>>>
>>>>
>>>> On Thu, Dec 21, 2017 at 5:10 AM, Musall, Maik <m...@selbstdenker.ag>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a lookup table with >400k rows that I want to periodically
>>>>> refill from external sources. Since it also contains precomputed values
>>>>> that are not part of the external source, my plan is to read the
>>>>> external data and batch-insert it all into the table.
>>>>>
>>>>> How can I truncate the entire table to prepare it for new inserts? The
>>>>> only thing that comes to mind is raw SQL, but that would obviously
>>>>> leave stale data in Cayenne's various caches.
>>>>>
>>>>> Thanks
>>>>> Maik
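
A minimal sketch of the two approaches discussed in this thread, written
against the Cayenne 4.0 API (ObjectSelect, SQLTemplate). LookupEntry, the
lookup_entry table, the "lookup" cache group, and the runtime/context
variables are illustrative placeholders rather than names from the thread:

    import java.util.List;
    import java.util.stream.Collectors;

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.configuration.server.ServerRuntime;
    import org.apache.cayenne.query.ObjectSelect;
    import org.apache.cayenne.query.SQLTemplate;

    public class LookupRefresh {

        // Query cache approach: tag fetches with a cache group so the
        // whole group can be dropped after the table is refilled.
        static List<LookupEntry> fetchCached(ObjectContext context) {
            return ObjectSelect.query(LookupEntry.class)
                    .sharedCache("lookup")
                    .select(context);
        }

        static void refill(ServerRuntime runtime, ObjectContext context) {
            // Truncate via raw SQL; Cayenne has no built-in truncate.
            context.performGenericQuery(
                    new SQLTemplate(LookupEntry.class,
                            "TRUNCATE TABLE lookup_entry"));

            // ... batch-insert the fresh rows here, ideally in chunks
            // using short-lived contexts ...

            // Drop the cached result lists for the group.
            runtime.getDataDomain().getQueryCache().removeGroup("lookup");

            // Object cache approach: invalidate instances registered in
            // this context; on a DataContext this also evicts their
            // snapshots from the shared cache.
            List<Object> stale = context.getGraphManager().registeredNodes()
                    .stream()
                    .filter(n -> n instanceof LookupEntry)
                    .collect(Collectors.toList());
            context.invalidateObjects(stale);
        }
    }

Note that registeredNodes() only sees instances registered in that one
context, which is part of why the query cache group is the more practical
route for a whole-table refresh.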