There are many views on this problem. Let me borrow an example from Hibernate 
ORM:

Databases were shared among multiple applications and clients (I’m aware 
that this pattern is discouraged for many reasons) on several of my earlier 
projects. A lot of these projects used Hibernate ORM to access the data. There 
was one master application that had authority over several tables. Clients 
other than the master application were also able to modify the data, and they 
configured only a minimal subset of entities and columns, sometimes even 
differently from the master application’s configuration.

That was required to ensure interoperability between the applications and the 
database. I fully understand the weirdness of these scenarios, especially 
because in some cases nobody really knew how the whole setup worked. 
Hibernate’s openness allowed us to keep using Hibernate and still get our work 
done because Hibernate did not break things. It stayed out of our way where we 
needed it to and served us where required.

That’s also my way of thinking here. I’d be rather surprised if, just because 
the framework takes ownership over particular aspects of the data, I were no 
longer able to integrate with other clients that operate on the same data 
store.

I have the feeling that there might be an equal number of good arguments for 
both strategies: preserving the existing TTL or applying the configured TTL.
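
For illustration, the "preserve" strategy roughly amounts to the sketch below. 
It uses the Jedis client and made-up key/value names; it is not OGM’s actual 
dialect code, just the shape of the logic we are discussing:

    import redis.clients.jedis.Jedis;

    public class PreserveTtlSketch {

        // Store a value but keep whatever TTL was left on the key before the write.
        public static void storePreservingTtl(Jedis jedis, String key, String json) {
            long remainingMillis = jedis.pttl(key);  // remaining TTL in ms, negative if none
            jedis.set(key, json);                    // SET discards any previous TTL
            if (remainingMillis > 0) {
                jedis.pexpire(key, remainingMillis); // re-apply what was left before the write
            }
        }
    }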

Let’s digress into TTL in MongoDB for a moment: TTL in MongoDB is handled by a 
combination of an index (expireAfterSeconds) and the documents themselves. The 
actual expiry is set within each document.
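
To make that concrete, here is a minimal sketch with the MongoDB Java driver 
(collection and field names are made up): the TTL index declares that 
documents expire, and the per-document date says when:

    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.IndexOptions;
    import com.mongodb.client.model.Indexes;
    import org.bson.Document;

    import java.util.Date;
    import java.util.concurrent.TimeUnit;

    public class MongoTtlSketch {

        public static void example(MongoCollection<Document> carts) {
            // TTL index: remove documents once their "expireAt" date has passed
            carts.createIndex(Indexes.ascending("expireAt"),
                    new IndexOptions().expireAfter(0L, TimeUnit.SECONDS));

            // The actual expiry travels with the document itself
            carts.insertOne(new Document("_id", "cart:42")
                    .append("payload", "...")
                    .append("expireAt", new Date(System.currentTimeMillis() + 60_000)));
        }
    }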

So maybe we could address the TTL issue in a different, more user-friendly way 
and provide properties within an entity annotated with @TTL. I’m not familiar 
enough with the ORM and OGM core to estimate the complexity of such a feature. 
With @TTL properties we could give the user control over the TTL. Then we 
would just need to decide what to do if there is no @TTL property inside an 
entity, and we’re back at the question of what to do in the default case.
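
Purely as a sketch of what such a mapping could look like (a property-level 
@TTL does not exist today; the annotation, entity and field names below are 
made up):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.util.Date;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Hypothetical annotation, declared here only to make the sketch self-contained.
    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface TTL {
    }

    @Entity
    public class CachedSession {

        @Id
        private String id;

        private String payload;

        // Hypothetical: the dialect would derive the key expiry from this property
        // instead of preserving or re-applying a TTL on its own.
        @TTL
        private Date expireAt;

        // getters and setters omitted
    }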


> On 27.06.2016 at 16:21, Guillaume Smet <guillaume.s...@gmail.com> wrote:
> 
> Hi Mark!
> 
> Thanks for commenting on this, I was hoping for it.

:-)

> 
> While I can see the use case for sharing Redis data between 2 tools, I must 
> admit that I find it a bit weird to set the TTL on one tool and store the 
> entity on another one. It looks to me that if OGM stores the data, it also 
> has to manage the expiration.
> 
> Otherwise you store data from OGM which might expire in a few seconds if 
> you're not lucky!
> 
> Don't you agree?

That’s the nature of NoSQL data stores, I guess.


> 
> -- 
> Guillaume
> 
> On Mon, Jun 27, 2016 at 3:47 PM, Mark Paluch <mpal...@paluch.biz> wrote:
> 
> Hi Guillaume, 
> 
> The TTL preservation behavior follows from how Redis works and is meant to 
> preserve interoperability: 
> 
>> http://redis.io/commands/set
>> Set key to hold the string value. [...] Any previous time to live associated 
>> with the key is discarded on successful SET operation.
> 
> 
> Keys written with SET lose their TTL value, and the entry is persisted 
> without any further TTL. Reading and re-applying the TTL preserves the 
> expiry.
> The general idea behind it is to either apply the remaining TTL from the key 
> (when no TTL is configured in the entity model) or to set the configured TTL 
> from the entity model.
> I see it from an integration perspective in which Hibernate OGM and other 
> tools share Redis data, so you opt in to features but nothing gets broken.
> 
> Best regards, Mark
> 
> 
>> On 27.06.2016 at 14:43, Guillaume Smet <guillaume.s...@gmail.com> wrote:
>> 
>> Hi,
>> 
>> So, I'm currently working on reducing the number of calls issued to Redis
>> in OGM as part of OGM-1064.
>> 
>> At the moment, we execute a call to Redis to get the TTL already configured
>> on an object before saving it. If the TTL is not explicitly configured with
>> @TTL, we set this TTL again after having stored this entity (see
>> RedisJsonDialect#storeEntity). Same for associations stored in a different
>> document.
>> 
>> In fact, this call returns the time remaining before expiration, not the
>> TTL previously configured, so I find this behavior quite weird. Basically,
>> we store information which will expire sooner than expected. I can't really
>> see a use case for this, and I don't think we should have an additional call
>> every time we store an object for such an obscure thing. Do we really expect
>> people to mess with the TTLs of objects stored by OGM without relying on
>> OGM's @TTL management?
>> 
>> IMHO, we should get rid of this call and only deal with TTL when it's
>> configured via the @TTL annotation.
>> 
>> Thoughts?
>> 
>> -- 
>> Guillaume
> 
> 

_______________________________________________
hibernate-dev mailing list
hibernate-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev
