Re: [hibernate-dev] Resetting Lucene lock at Directory initialization
On the Lucene side, it seems to me that manually asking for a lock clear is
cleaner / safer than this automagic approach.

On 16 oct. 09, at 16:50, Sanne Grinovero wrote:

> Hello,
> Lucene does - in the default LockManager implementation - a sort of "lock
> cleanup" at index creation: if it detects a lock on the index at startup,
> it is cleared.
>
> Łukasz translated the exact same semantics to the Infinispan Directory;
> the current implementation inserts a "lock marker" at a conventional key,
> as if Infinispan were a filesystem.
> So what is done in this case is to just delete the value at this key, if
> any, at startup (to be precise: at lockFactory.clearLock()).
>
> But in this situation I would need to "steal" the lock from another node,
> if it exists. IMHO this Lucene approach does not consider concurrent
> initializations of the FSDirectory.
> So my doubts:
> 1) Is there some API in Infinispan capable of invalidating an existing
> lock on a key in case another node is still holding it (and will I have
> the other node failing?)
> 2) Does it make sense at all? It looks like bad practice to steal stuff.
>
> I am considering writing this lock using SKIP_CACHE_STORE, in which case
> I could assume that if one exists, there's a good reason not to delete
> the lock, as other nodes are running and using the index. In case all
> nodes go down, the lock doesn't exist, as it was never stored.
>
> So my proposal is to do a no-op on lockFactory.clearLock(), and use
> SKIP_CACHE_STORE when the lock is created.
>
> When an IndexWriter re-creates an index (basically making an existing one
> empty) it first uses clearLock(), then it tries to acquire one, so it
> looks like it should be safe.
>
> WDYT? This concept of SKIP_CACHE_STORE is totally new to me; maybe I just
> misunderstood the usage.
>
> Regards,
> Sanne
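For readers unfamiliar with the flag, a minimal sketch of what writing the
lock marker with SKIP_CACHE_STORE might look like, assuming Infinispan's
AdvancedCache.withFlags API; the LockMarker class, cache and key name are
hypothetical, and this only illustrates the proposal, not code from the
thread:

    import org.infinispan.AdvancedCache;
    import org.infinispan.Cache;
    import org.infinispan.context.Flag;

    // Sketch of Sanne's proposal: the lock marker is written with
    // SKIP_CACHE_STORE, so it lives only in memory and vanishes when all
    // nodes go down; a marker found at startup therefore belongs to a
    // live node and must not be cleared.
    public class LockMarker {
        private static final String WRITE_LOCK_KEY = "write.lock"; // assumed key name

        public static boolean tryAcquire(Cache<String, String> cache, String owner) {
            AdvancedCache<String, String> ac =
                    cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_STORE);
            // putIfAbsent is atomic: only one node can create the marker.
            return ac.putIfAbsent(WRITE_LOCK_KEY, owner) == null;
        }
    }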
Re: [hibernate-dev] core/trunk
Does that mean adding this in a settings.xml profile?

<jdk16_home>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/</jdk16_home>

?

I will update https://www.hibernate.org/422.html once I know.

On 16 oct. 09, at 17:12, Steve Ebersole wrote:

> Just to let y'all know that as of now, to build trunk you will need to
> set a property named jdk16_home. The best option is to put this in your
> ~/.m2/settings.xml file.
>
> For details, see
> http://opensource.atlassian.com/projects/hibernate/browse/HHH-4499
>
> Thanks :)
>
> --
> Steve Ebersole
> Hibernate.org
Re: [hibernate-dev] core/trunk
Hi,
yes, the property should be added to the profile you are activating when
building Core. In the example on the wiki it should go within the
'standard-extra-repos' profile. My properties look like this:

/opt/java/repository.jboss.org
<jdk16_home>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home</jdk16_home>

It might be worth mentioning that you can - at least for now - get away
with just a JDK 5, by specifying the disableJDK6Modules property:

mvn clean install -DdisableJDK6Modules=true

This will skip building the cache-infinispan and jdbc4-testing modules.

--Hardy

On Mon, 19 Oct 2009 09:20:30 +0200, Emmanuel Bernard wrote:

> Does that mean adding this in a settings.xml profile?
> [...]
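Putting the two messages together, a ~/.m2/settings.xml along these lines
should work; the property and profile names come from Steve's and Hardy's
notes, the surrounding structure is standard Maven settings syntax, and the
path is an example to adjust per machine:

    <settings>
      <profiles>
        <profile>
          <id>standard-extra-repos</id>
          <properties>
            <!-- Path to a JDK 6 installation; needed to build trunk (HHH-4499). -->
            <jdk16_home>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home</jdk16_home>
          </properties>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>standard-extra-repos</activeProfile>
      </activeProfiles>
    </settings>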
Re: [hibernate-dev] [infinispan-dev] Resetting Lucene lock at Directory initialization
On 16 Oct 2009, at 15:50, Sanne Grinovero wrote:

> [...]
> I am considering writing this lock using SKIP_CACHE_STORE, in which case
> I could assume that if one exists, there's a good reason not to delete
> the lock, as other nodes are running and using the index. In case all
> nodes go down, the lock doesn't exist, as it was never stored.
> [...]

You could use SKIP_CACHE_STORE, but that only means the lock marker won't
be loaded off disk or some other cache store. It could still be loaded from
a neighbouring node.

--
Manik Surtani
ma...@jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Re: [hibernate-dev] Resetting Lucene lock at Directory initialization
Sanne, please don't use closed mailing lists; emails are bounced back and
that's annoying.

PS: removing Sourcesense's ML

On 16 oct. 09, at 16:50, Sanne Grinovero wrote:

> Hello,
> Lucene does - in the default LockManager implementation - a sort of "lock
> cleanup" at index creation: if it detects a lock on the index at startup,
> it is cleared.
> [...]
Re: [hibernate-dev] [infinispan-dev] Resetting Lucene lock at Directory initialization
On 19 Oct 2009, at 08:16, Emmanuel Bernard wrote:

> On the Lucene side, it seems to me that manually asking for a lock clear
> is cleaner / safer than this automagic approach.

Yeah, I agree with Emmanuel - a more explicit form would work better IMO.
Perhaps what you could do is something like this:

1) Create an entry, name "sharedlock", value "address of current lock
   owner".
2) Any time a node needs the lock, it adds its address to the "sharedlock"
   entry only if it doesn't exist (putIfAbsent).
3) If the entry exists, check whether that address is still in the cluster
   (using CacheManager.getMembers()). If the address doesn't exist (stale
   lock), remove and overwrite (using replace() to prevent concurrent
   overwrites).

WDYT?

- Manik

--
Manik Surtani
ma...@jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
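A rough sketch of what Manik's three steps could look like against the
Infinispan 4.x API; the SharedLock class, the "sharedlock" key and the
tryLock signature are assumptions for illustration only:

    import org.infinispan.Cache;
    import org.infinispan.manager.CacheManager;
    import org.infinispan.remoting.transport.Address;

    // Illustration of the putIfAbsent / getMembers() / replace() protocol
    // sketched above.
    public class SharedLock {
        private static final String LOCK_KEY = "sharedlock";

        public static boolean tryLock(CacheManager manager,
                                      Cache<String, Address> cache) {
            Address self = manager.getAddress();
            // Step 2: claim the lock atomically if nobody holds it.
            Address owner = cache.putIfAbsent(LOCK_KEY, self);
            if (owner == null || owner.equals(self)) {
                return true; // we hold the lock
            }
            // Step 3: if the recorded owner has left the cluster, the lock
            // is stale; replace() ensures only one contender takes it over.
            if (!manager.getMembers().contains(owner)) {
                return cache.replace(LOCK_KEY, owner, self);
            }
            return false; // lock held by a live node
        }
    }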
Re: [hibernate-dev] [infinispan-dev] Resetting Lucene lock at Directory initialization
Sorry, I'll try to explain myself better; I think there's some confusion
about what my problem is.

The Javadoc for LockFactory.clearLock - which is the interface we have to
implement - describes an explicit force-cleanup:

  /**
   * Attempt to clear (forcefully unlock and remove) the
   * specified lock. Only call this at a time when you are
   * certain this lock is no longer in use.
   * @param name name of the lock to be cleared.
   */
  public void clearLock(String name) throws IOException {

So yes, I would agree in avoiding all "automagics" here and just removing
the lock, but when IndexWriter opens the Directory in "create" mode it
does:

  [...]
  if (create) {
    // Clear the write lock in case it's leftover:
    directory.clearLock(WRITE_LOCK_NAME);
  }
  Lock writeLock = directory.makeLock(WRITE_LOCK_NAME);
  if (!writeLock.obtain(writeLockTimeout)) // obtain write lock
    throw new LockObtainFailedException("Index locked for write: " + writeLock);
  this.writeLock = writeLock; // save it
  [...]

basically "stealing" the lock ownership from existing running processes,
if any, and then applying changes to the index using the stolen lock.
Apparently this was working fine in Lucene's filesystem-based Directory,
but it would fail on Infinispan, as we are using transactions: concurrent
access is guaranteed to happen on the same keys, which the lock being
ignored was meant to prevent. And I'm happy for this to be illegal, as the
result would really be unpredictable :-)

My understanding of the IndexWriter code is that it uses this clearLock to
make sure it's able to start even after a previous crash, so I'd like to
implement the same functionality, but I need to detect whether the
left-over lock is really a left-over and not a working lock from another
node / IndexWriter instance. If the index is "live" it's fine for this
IndexWriter to re-create it (making it empty), but it still should
coordinate nicely with the other nodes.
IMHO the IndexWriter wanting to do a cleanup should block until it
properly gets the lock; as we acquire an eager lock in
writeLock.obtain(writeLockTimeout), my implementation of clearLock() could
be a no-op, provided we can distinguish a crash-leftover lock from an
in-use lock.

Manik, your idea is very interesting, but this lock is not shared: there
is just one owner. I could store the single lock owner as you suggest, or
is there some simpler way for this one-owner case? I understood that I
can't use Infinispan's eager lock, as this ownership spans multiple
transactions; am I right on this? It would be quite useful if I could keep
owning the lock even after the transaction commits, or have a separate TX
running for the lock lifecycle, like a LockManager transaction, also
because I expect Infinispan to clear these locks automatically in case the
lock owner crashes/disconnects.

thanks for all comments,
Sanne

2009/10/19 Manik Surtani :
>
> Yeah, I agree with Emmanuel - a more explicit form would work better IMO.
> Perhaps what you could do is something like this:
> [...]
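To make the "no-op clearLock" idea concrete, a minimal sketch against the
Lucene 2.x LockFactory API; InfinispanLockFactory, its inner lock class and
the "owner" value are hypothetical and only illustrate the shape of the
proposal:

    import java.io.IOException;
    import org.apache.lucene.store.Lock;
    import org.apache.lucene.store.LockFactory;
    import org.infinispan.Cache;

    // Hypothetical LockFactory per Sanne's proposal: clearLock() does
    // nothing, so a live lock held by another node is never stolen;
    // staleness is handled when obtain() is attempted instead.
    public class InfinispanLockFactory extends LockFactory {

        private final Cache<String, String> cache;

        public InfinispanLockFactory(Cache<String, String> cache) {
            this.cache = cache;
        }

        @Override
        public Lock makeLock(String lockName) {
            return new InfinispanLock(cache, lockName);
        }

        @Override
        public void clearLock(String lockName) throws IOException {
            // Deliberate no-op: any marker still present belongs to a
            // running node (see SKIP_CACHE_STORE / liveness checks above)
            // and must stay.
        }

        /** Hypothetical Lock backed by a cache entry; single owner. */
        private static class InfinispanLock extends Lock {
            private final Cache<String, String> cache;
            private final String name;

            InfinispanLock(Cache<String, String> cache, String name) {
                this.cache = cache;
                this.name = name;
            }

            @Override
            public boolean obtain() throws IOException {
                // Atomic claim; fails while another node holds the marker.
                return cache.putIfAbsent(name, "owner") == null;
            }

            @Override
            public void release() {
                cache.remove(name);
            }

            @Override
            public boolean isLocked() {
                return cache.containsKey(name);
            }
        }
    }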
Re: [hibernate-dev] [infinispan-dev] Resetting Lucene lock at Directory initialization
I would think you need a separate tx for the lifecycle of the lock.

On 19 Oct 2009, at 12:22, Sanne Grinovero wrote:

> Sorry, I'll try to explain myself better; I think there's some confusion
> about what my problem is.
> [...]
> It would be quite useful if I could keep owning the lock even after the
> transaction commits, or have a separate TX running for the lock
> lifecycle, like a LockManager transaction, also because I expect
> Infinispan to clear these locks automatically in case the lock owner
> crashes/disconnects.
> [...]

--
Manik Surtani
ma...@jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
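One way to run the lock operations in their own transaction, as Manik
suggests, is to suspend the caller's JTA transaction around them. A sketch
using the standard javax.transaction API; the LockTx class is hypothetical,
and where the TransactionManager comes from is an assumption (in practice
it would be the one Infinispan is configured with):

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import org.infinispan.Cache;

    // Sketch: perform the lock write in a dedicated transaction by
    // suspending whatever transaction the caller is already running.
    public final class LockTx {

        public static boolean acquireInOwnTx(TransactionManager tm,
                                             Cache<String, String> cache,
                                             String key, String owner)
                throws Exception {
            Transaction surrounding = tm.suspend(); // null if none active
            try {
                tm.begin();
                boolean acquired;
                try {
                    acquired = cache.putIfAbsent(key, owner) == null;
                    tm.commit();
                } catch (Exception e) {
                    tm.rollback();
                    throw e;
                }
                return acquired;
            } finally {
                if (surrounding != null) {
                    tm.resume(surrounding); // restore the caller's tx
                }
            }
        }
    }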