Hey everyone, I'd like to get your opinion on something. We had this issue regarding timezone handling which Vlad closed pretty quickly, saying it isn't a Hibernate problem. I generally agree, but I'd still like to know what you think about it.
So basically what we do in the LocalDateJavaDescriptor (and also in some other places) is to take the value returned by java.sql.Date in some way and pack it into the target type. In this case that happens behind the scenes when invoking java.sql.Date.toLocalDate(), which internally just calls LocalDate.of(getYear() + 1900, getMonth() + 1, getDate()).

Now IMO the problem really is in java.sql.Date, because it does a timezone conversion. The user who created the issue HHH-11396 <https://hibernate.atlassian.net/browse/HHH-11396> pointed out that he was using a DATE column type because he wanted a simple date, i.e. year, month and day. When using java.sql.Date and the consumer is in a timezone ahead of UTC, e.g. UTC+1, java.sql.Date subtracts that offset during /normalization/, so the millisecond value changes by that offset. This means that a date that is 2000-01-01 in the DBMS can become 1999-12-31 when the client is in UTC+1. One possible fix is to simply configure UTC for the consumer; then there is no timezone shift.

I think what java.sql.Date does is wrong, because a date has no time part, so there shouldn't be any time shifts. We should work around that by shifting the millisecond value back when constructing a LocalDate (see the sketch at the end of this mail). What do you think we should do? Does anyone maybe know why java.sql.Date behaves that way?

--

Kind regards,
Christian Beikov
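To make the shift easier to see, here is a minimal, self-contained sketch. The class name and the hard-coded GMT+1 default zone are just assumptions for the example, and the last part only illustrates the kind of millisecond shift I have in mind, not what Hibernate currently does:

import java.sql.Date;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.TimeZone;

public class DateShiftSketch {

    public static void main(String[] args) {
        // Assumption for this sketch: the client JVM runs in UTC+1.
        TimeZone.setDefault(TimeZone.getTimeZone("GMT+1"));

        // java.sql.Date normalizes "2000-01-01" to local midnight, i.e. the
        // millisecond value ends up at 1999-12-31T23:00:00Z - the UTC+1
        // offset has been subtracted from UTC midnight.
        Date sqlDate = Date.valueOf("2000-01-01");
        System.out.println(Instant.ofEpochMilli(sqlDate.getTime()));
        // 1999-12-31T23:00:00Z

        // Any consumer that interprets the raw millisecond value as UTC
        // therefore sees the previous day.
        LocalDate shifted = Instant.ofEpochMilli(sqlDate.getTime())
                .atZone(ZoneOffset.UTC)
                .toLocalDate();
        System.out.println(shifted); // 1999-12-31

        // toLocalDate() itself goes through getYear()/getMonth()/getDate(),
        // which re-apply the default timezone, so it comes back to
        // 2000-01-01 here - but only as long as both sides agree on the zone.
        System.out.println(sqlDate.toLocalDate()); // 2000-01-01

        // Workaround idea: shift the millisecond value back by the zone
        // offset before building the LocalDate, so the date part no longer
        // depends on the default timezone.
        long millis = sqlDate.getTime();
        long adjusted = millis + TimeZone.getDefault().getOffset(millis);
        LocalDate stable = Instant.ofEpochMilli(adjusted)
                .atZone(ZoneOffset.UTC)
                .toLocalDate();
        System.out.println(stable); // 2000-01-01
    }
}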