[hibernate-dev] 'INSERT' transactions are always rolled back on PostgreSQL when partitioning is used.

2007-10-23 Thread Julius Stroffek

Hi All,

I have created a simple Java application running on GlassFish, using 
Hibernate as the persistence provider on PostgreSQL. It simply 
inserts, displays, and deletes rows from a table.


Everything worked fine until I set up partitioning on the tables as described at
http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html

After that, the application reported every insert into the table as
"Transaction marked for rollback" and threw the following exception:


javax.persistence.OptimisticLockException: org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
   at org.hibernate.ejb.AbstractEntityManagerImpl.wrapStaleStateException(AbstractEntityManagerImpl.java:654)
   at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:600)
   at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:525)
   (plenty of more lines)
Caused by: org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
   at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:61)
   at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:46)
   at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:68)
   at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
   at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:246)
   at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:237)
   at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141)
   at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
   at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
   at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
   at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
   at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:516)
   ... 50 more

This happens because Statement.executeBatch returns an incorrect number of 
rows affected by the statement. The command status string that PostgreSQL 
returns in this case is described at

http://www.postgresql.org/docs/8.2/static/rules-status.html

Since the rule system in PostgreSQL is too complex in general, it is not 
possible to define the command status string so that it contains the 
number of rows actually affected by the statement. I think the JDBC driver 
should return Statement.SUCCESS_NO_INFO in this case; however, it does not 
do so now, and the JDBC spec does not allow Statement.SUCCESS_NO_INFO to be 
returned from Statement.executeUpdate.
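
For illustration, here is a minimal JDBC sketch of the behaviour; the table and 
columns follow the "measurement" example from the partitioning docs linked above, 
the rest is made up, and the only point is the row counts returned by executeBatch:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class PartitionedInsertRowCounts {

    // Inserts one row into a parent table whose INSERTs are redirected to a
    // child partition by a rule, then inspects the batch row counts.
    static void insertAndCheck(Connection con) throws Exception {
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO measurement (logdate, peaktemp) VALUES (?, ?)");
        try {
            ps.setDate(1, java.sql.Date.valueOf("2007-10-23"));
            ps.setInt(2, 42);
            ps.addBatch();
            int[] counts = ps.executeBatch();
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == Statement.SUCCESS_NO_INFO) {
                    // Row count unknown but the statement succeeded; this is
                    // what I would expect the driver to report here.
                    continue;
                }
                if (counts[i] != 1) {
                    // With the redirecting rule in place the server reports the
                    // parent-table INSERT as affecting 0 rows, so a strict
                    // "exactly one row" check like Hibernate's fails.
                    System.err.println("unexpected row count: " + counts[i]);
                }
            }
        } finally {
            ps.close();
        }
    }
}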


I had a look at the Hibernate source, where there are a couple of 
"Expectation" implementations, but I have not found out how to choose a 
different implementation in my application. A workaround could be to use 
Expectations.NONE as the "Expectation" instance. Is it possible to set 
this up in application code or configuration? I have not found anything 
like this in the documentation.
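
To make concrete the kind of check I mean, here is a rough sketch; these types 
are only an illustration of the idea, not Hibernate's actual Expectation classes:

// Illustration only: mirrors the idea of the row-count "expectation" check.
interface RowCountExpectation {
    void verifyOutcome(int actualRowCount, int batchPosition);
}

// Strict check: roughly the behaviour that fails on the partitioned table.
class OneRowExpectation implements RowCountExpectation {
    public void verifyOutcome(int actualRowCount, int batchPosition) {
        if (actualRowCount != 1) {
            throw new IllegalStateException(
                "Batch update returned unexpected row count from update ["
                + batchPosition + "]; actual row count: " + actualRowCount
                + "; expected: 1");
        }
    }
}

// "NONE"-style workaround: accept whatever row count the driver reports.
class NoExpectation implements RowCountExpectation {
    public void verifyOutcome(int actualRowCount, int batchPosition) {
        // intentionally no verification
    }
}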


Thanks for your comments.

Cheers

Julo


[hibernate-dev] Hibernate and .NET Framework 3.0

2007-10-23 Thread S P
Does someone have any idea/opinion on NHibernate compatibility with .NET 
Framework 3.0 and/or Visual Studio 2008? Any idea about NHibernate working 
with SQL Compact Edition?

Thanks,



[hibernate-dev] Clustering and UpdateTimestampsCache

2007-10-23 Thread Brian Stansberry
Wanted to raise a point about timestamps cache handling in case 
there's any desire to change the UpdateTimestampsCache API in 3.3.


AIUI, a goal of UpdateTimestampsCache is to ensure the cached timestamp 
never moves backward in time *except* when a caller that has set the 
timestamp to a far-in-the-future value in preInvalidate() later comes 
back and calls invalidate(), passing the current time.


There's a race in UpdateTimestampsCache where this could break under 
concurrent load.  For example, you could see:


(now = 0)
tx1 : preInvalidate(60);
(now = 1)
tx2 : preInvalidate(61);
tx1 : cache queryA w/ timestamp 1
tx1 : invalidate(1)
tx2 : update entity in a way that would affect queryA results
tx2 : read queryA; check timestamp; 1 == 1 so passes. Wrong!

To deal with this, there are some comments in UpdateTimestampsCache 
about having preInvalidate() return some sort of Lock object, which 
would then be passed back as a param to invalidate(). The idea here is to 
ensure that only the caller that most recently called preInvalidate() is 
allowed to call invalidate().
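
To make that idea concrete, a rough sketch of what such a Lock token could look 
like; the names and structure here are mine, not anything that exists in Hibernate:

// One query space's timestamp plus a token identifying the latest pre-invalidator.
class TimestampSlot {
    private Object currentLock; // token handed out by the most recent preInvalidate()
    private long timestamp;

    public synchronized Object preInvalidate(long farFutureTimestamp) {
        timestamp = farFutureTimestamp;
        currentLock = new Object(); // fresh token identifies this caller
        return currentLock;
    }

    public synchronized void invalidate(Object lock, long now) {
        // Only the caller holding the most recent token may move the value back.
        if (lock == currentLock) {
            timestamp = now;
            currentLock = null;
        }
        // Otherwise a newer preInvalidate() is in flight; leave its value alone.
    }

    public synchronized long get() {
        return timestamp;
    }
}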


That could work if the backing TimestampsRegion isn't clustered, but it 
doesn't address the fact that a clustered TimestampsRegion can be 
getting updates not only via the local UpdateTimestampsCache, but also 
asynchronously over the network.  If a clustered TimestampsRegion gets a 
replicated update that moves the timestamp back in time, it has no 
simple way to know if this is because 1) a peer that earlier replicated 
 a high preinvalidate value is now replicating a normal invalidate 
value or 2) an earlier change from peer A has arrived *after* a later 
change from peer B.


This could be addressed with a change to the TimestampsRegion API. 
Basically replace


public void put(Object key, Object value) throws CacheException;

with

public void preInvalidate(Object key, Object value) throws CacheException;
public void invalidate(Object key, Object value, Object preInvalidateValue) throws CacheException;


The value that is passed to preInvalidate() is also passed as an 
additional param to invalidate(). This gives the TimestampsRegion the 
information it needs to properly track preinvalidations vs invalidations.
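
For example, a purely illustrative (non-clustered) region could use the extra 
parameter roughly like this; none of these names correspond to a real Hibernate 
region implementation:

import java.util.HashMap;
import java.util.Map;

// Sketch only: how a TimestampsRegion could use the proposed preInvalidateValue
// to decide whether a backward-moving update is a legitimate invalidation.
class SketchTimestampsRegion {
    private final Map<Object, Long> timestamps = new HashMap<Object, Long>();

    public synchronized void preInvalidate(Object key, Object value) {
        // Far-in-the-future value; only ever moves the timestamp forward.
        Long newTs = (Long) value;
        Long current = timestamps.get(key);
        if (current == null || newTs.longValue() > current.longValue()) {
            timestamps.put(key, newTs);
        }
    }

    public synchronized void invalidate(Object key, Object value, Object preInvalidateValue) {
        Long newTs = (Long) value;
        Long current = timestamps.get(key);
        // Either the stored value is exactly the pre-invalidation value this call
        // completes (so moving backward to "now" is legitimate), or it came from
        // some other update and the new value is only applied if it does not move
        // time backward.
        if (current == null || current.equals(preInvalidateValue)
                || newTs.longValue() >= current.longValue()) {
            timestamps.put(key, newTs);
        }
    }
}

A clustered implementation could apply the same rule when it receives a 
replicated update, which is exactly the information it is missing today.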


The UpdateTimestampsCache API is then changed to provide the caller with 
the timestamp in preInvalidate() and take it back in invalidate():



public synchronized Object preinvalidate(Serializable[] spaces) throws CacheException {
    Long ts = new Long( region.nextTimestamp() + region.getTimeout() );
    for ( int i=0; i<spaces.length; i++ ) {
        region.preInvalidate( spaces[i], ts );
    }
    return ts;
}

public synchronized void invalidate(Serializable[] spaces, Object preInvalidateValue) throws CacheException {
    Long ts = new Long( region.nextTimestamp() );
    for ( int i=0; i<spaces.length; i++ ) {
        region.invalidate( spaces[i], ts, preInvalidateValue );
    }
}

This is basically similar to the Lock concept in the 
UpdateTimestampsCache comments, but the control over the update is 
delegated to the TimestampsRegion.


The issue here is that the UpdateTimestampsCache caller needs to hold onto 
the value returned by preInvalidate() and then pass it back, which likely 
requires a change to Executable to provide a holder for it.
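
Caller side this would look roughly like the following; the class and method 
names are only placeholders for wherever the holder ends up (e.g. on the 
Executable), and UpdateTimestampsCache/CacheException refer to the modified 
API sketched above:

import java.io.Serializable;

// Placeholder for whatever object ends up holding the preinvalidate() result
// between query-space pre-invalidation and after-transaction invalidation.
class QuerySpaceInvalidation {
    private final UpdateTimestampsCache cache;   // the modified API sketched above
    private final Serializable[] querySpaces;
    private Object preInvalidateValue;           // holder for the returned value

    QuerySpaceInvalidation(UpdateTimestampsCache cache, Serializable[] querySpaces) {
        this.cache = cache;
        this.querySpaces = querySpaces;
    }

    void beforeExecutions() throws CacheException {
        preInvalidateValue = cache.preinvalidate(querySpaces);
    }

    void afterTransactionCompletion() throws CacheException {
        cache.invalidate(querySpaces, preInvalidateValue);
    }
}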


A change to the TimestampsRegion API has no benefit without a 
corresponding change in UpdateTimestampsCache and its caller.



--
Brian Stansberry
Lead, AS Clustering
JBoss, a division of Red Hat
[EMAIL PROTECTED]