>> One specific question: Would there be a way for me to tell the Cayenne stack
>> that its world view is now essentially stale, dump all of the cache and
>> start fresh? This would be a very acceptable workaround for many of the
>> problem cases.
>
> I guess a more subtle solution would be to write queries using a combination
> of query caching and prefetching, and use optimistic locking to catch
> unforeseen changes on the commit end. Invalidating the query cache will be
> the equivalent of "start fresh". Such a design should stay relevant even once
> your second app goes away (and will also allow for clustering your new app).
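To make that concrete, here's a rough sketch of the kind of query I believe Andrus is describing, against Cayenne 4.x. The Order class, its CUSTOMER property and the "orders" cache group are placeholders for illustration, not anything from the actual project:

    import java.util.List;

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.configuration.server.ServerRuntime;
    import org.apache.cayenne.query.ObjectSelect;
    // import com.example.model.Order;  // hypothetical generated persistent class

    public class CachedOrderFetch {

        public static void main(String[] args) {
            ServerRuntime runtime = ServerRuntime.builder()
                    .addConfig("cayenne-project.xml")
                    .build();
            ObjectContext context = runtime.newContext();

            // Cached, prefetched query: results land in the shared query cache
            // under the "orders" cache group; the CUSTOMER relationship is
            // resolved up front via a joint prefetch.
            List<Order> orders = ObjectSelect.query(Order.class)
                    .prefetch(Order.CUSTOMER.joint())
                    .sharedCache("orders")
                    .select(context);

            // ... later, when the legacy app may have changed the data underneath
            // us, invalidate the group; the next select re-reads from the DB,
            // which is the per-group equivalent of "start fresh".
            runtime.getDataDomain().getQueryCache().removeGroup("orders");
        }
    }

The optimistic locking part wouldn't show up in a snippet like this; it's switched on per attribute in the model (the "Used for Locking" flag in CayenneModeler), after which a commit that collides with a change made by the other app fails instead of silently overwriting it.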
Thanks Andrus, I think that's definitely a sensible way going forward (i.e. making informed decisions about caching at the design/code level).

As a stopgap measure I've disabled the shared object cache. It's an acceptable workaround within the parameters of the application for now.

Cheers,
- hugi

>> On Mar 7, 2019, at 4:00 PM, Hugi Thordarson <h...@karlmenn.is> wrote:
>>
>> Hi all.
>>
>> I'm currently in the process of rewriting an old Java system, replacing it
>> with a Cayenne-powered web app. While we're doing the rewrite, the two
>> systems need to run side by side, with users using both equally. As you
>> might have guessed, this is causing some real problems with stale/missing
>> data in the Cayenne stack.
>>
>> I know I can perform explicit fetching with prefetches to refresh select
>> data, but I have no idea when data might be stale, and I'd prefer not having
>> to perform every fetch, every time, but rather just write this assuming it's
>> a single, regular application (as it will eventually be).
>>
>> I'm basically looking for general advice, in case anyone has experience and
>> would like to share strategies or workarounds for a frequently changing DB.
>>
>> One specific question: Would there be a way for me to tell the Cayenne stack
>> that its world view is now essentially stale, dump all of the cache and
>> start fresh? This would be a very acceptable workaround for many of the
>> problem cases.
>>
>> Cheers,
>> - hugi
>