I agree, that is exactly what I am trying to say: yes, a Date is typically seen as representing the time interval of the whole day. But if we want to be able to compare it to DateTime objects (as was requested), can we define a time (since the comparison evidently won't work on intervals directly) in a general, meaningful, least-surprise way, so that the result of the comparison makes sense in some useful scenarios, without carrying too much danger of introducing erroneous behavior?
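For concreteness, a minimal Groovy sketch (using java.time) of what such a comparison could look like under the start-of-day convention; compareDateToDateTime is a hypothetical helper, not an existing Groovy or JDK method:

    import java.time.LocalDate
    import java.time.LocalDateTime

    // Hypothetical helper: widen the LocalDate to the start of its day,
    // then compare it to the LocalDateTime.
    int compareDateToDateTime(LocalDate date, LocalDateTime dateTime) {
        date.atStartOfDay() <=> dateTime
    }

    def d = LocalDate.of(2021, 11, 18)
    assert compareDateToDateTime(d, LocalDateTime.of(2021, 11, 17, 23, 59)) > 0
    assert compareDateToDateTime(d, LocalDateTime.of(2021, 11, 18,  0,  0)) == 0
    assert compareDateToDateTime(d, LocalDateTime.of(2021, 11, 18, 12,  0)) < 0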

So:

1. Can it generally be done?
2. And then, as always, "risk/reward" / "liveness vs. safety"?

Cheers,
mg


On 18/11/2021 22:25, OCsite wrote:
Guys,

I might have missed something of importance somewhere, for I haven't read the full thread, so please ignore me if I am repeating something already thoroughly debated.

Nevertheless, based on the part I did read, it sort of looks like you perhaps might possibly have missed that any value with a limited precision can be, from the full-precision point of view, interpreted (at the very least) in two ways:

- either as an interval: 1776 means the whole year, 3 represents the whole interval [3, 4)
- or as a precise point: 1776 is 0:00.000 Jan 1 1776; 3 is just 3, with any number of zero decimal places.

Neither of these interpretations is „right“ or „wrong“. It depends on the particular application which of them (and in which place) it decides to use; based on that, the further processing needs to follow one or the other approach.

All the best and sorry for the intrusion,
OC

On 18 Nov 2021, at 22:13, MG <mg...@arscreat.com> wrote:

 1. What I wrote ("A day, year, etc is evidently never equal to an
    actual point in time, since it is an interval. The question for
    me is: ...") is already the answer to your first sentence.
 2. Scale does matter here, so no, reasoning about years and days is
    not the same as days and seconds.
 3. I am not talking about all possible applications, see under
    "...should evidently go DateTime all the way".
 4. By your general logic, float should never be comparable to
    double, since from the view of double, float is an interval
    containing all the double precision ("digits") float is missing.
    Then BigDecimal shows up... :-) (see the small illustration below)
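A quick Groovy illustration of point 4, for what it's worth: Java and Groovy happily compare float with double by widening the float, precision mismatch and all.

    float f = 2.5f
    double d = 2.5d
    assert f == d       // the float is widened; exactly representable, so equal

    float g = 0.1f
    double e = 0.1d
    assert g != e       // the widened value carries only float precision, so these differ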


On 18/11/2021 18:55, Alessio Stalla wrote:
Is the year 2001 "before" the date 2001-06-01? I'd say no; I'd say the year 2001 "contains" any date with year = 2001, so it cannot logically be "before" or "after" it. Suppose you're sorting people by birth date, and they can enter either the full date or just the year. How would you meaningfully compare someone who was born "in 2001" with someone who was born "on 2001-06-01"? It makes no sense to equate "born in 2001" with "born on Jan 1st, 2001". I'm using year and day because they're easier to reason about, but it's the same for date and time. A date is an entire day, i.e. any point within that day. It's not equal to midnight, at least not in general, not in all possible applications. Conflating the two just because it happens to work for some applications is bad design, imho.
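For what it's worth, the sorting objection in code form; asDate is a hypothetical coercion, assumed here purely for illustration:

    import java.time.LocalDate
    import java.time.Year

    // Hypothetical coercion: treat a bare year as January 1st of that year.
    LocalDate asDate(Year y) { y.atDay(1) }

    def knownOnlyByYear = Year.of(2001)             // "born in 2001"
    def knownExactly    = LocalDate.of(2001, 6, 1)  // "born on 2001-06-01"

    // The coerced comparison claims a definite ordering...
    assert asDate(knownOnlyByYear) < knownExactly
    // ...even though the person "born in 2001" may well have been born later.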

On Thu, 18 Nov 2021 at 17:43, MG <mg...@arscreat.com> wrote:

    A day, year, etc is evidently never equal to an actual point in
    time, since it is an interval. The question for me is: Can we
    convert the Date to a DateTime so that it has an ordering which
    is helpful/meaningful in practice, without inviting unexpected
    bugs etc.?

    So what concrete scenario do you see where the implicit
    attaching of 00:00:00 to a Date for the sake of comparison

     1. leads to a program error in a practical scenario (i.e. not a
        constructed one, and not for applications which control their
        data types and should evidently go DateTime all the way)?
     2. leads to an unexpected result, i.e. "does not work for the
        developer or user"?

    You might assume I am dead set on getting this into Groovy, but
    that is not the case. It is just that the counter-arguments I
    have seen up to this point seemed quite weak to me, so I have
    taken the position of arguing for it (which is the direction I
    am leaning towards) - but convince me otherwise (saying "it is
    just wrong on principle" won't do that, though, unless I buy
    into your principle, which I often do not, since for me what is
    relevant is mostly whether it works in practice).

    Cheers,
    mg

    PS: The "filling with zeroes" was a fluff comment - that's why
    it is in brackets and has a corresponding smiley at the end ;-)



    On 18/11/2021 16:25, h...@abula.org wrote:
    Hi!

    Yes, I got that, but step 1 breaks it IMHO.

    It's just as wrong as assuming that a year is equivalent to New
    Year's Day of that year (at midnight, even).

    Filling up with zeroes works when comparing integer numbers
    with real numbers, but that's about it.

    For one thing, the integer / real number comparison works both
    ways. The same cannot be said about LocalDateTime and LocalDate.

    Sorry...

    BR;H2

    On 2021-11-18 16:01, MG wrote:
    1. Implicitly attach Time to Date
    2. Fill Time with zeroes
    3. There you go
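
    (In java.time terms, steps 1 and 2 together are simply
    atStartOfDay(); a minimal sketch:)

        import java.time.LocalDate

        def date = LocalDate.of(2021, 11, 18)

        // Steps 1 and 2: attach a time of day and fill it with zeroes.
        def dateTime = date.atStartOfDay()

        assert dateTime.toString() == '2021-11-18T00:00'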


    On 18/11/2021 15:45, h...@abula.org wrote:
    Re. 5:

    But there is nothing to fill up with zeroes...

    BR;H2

    On 2021-11-18 15:11, MG wrote:
    I don't think that is correct: time intervals for days, etc.
    always need to be chosen so they are overlap-free*, i.e.
    mathematically speaking the interval is closed on one end and
    open on the other, with the start of the next interval being
    the end of the last: [t0,t1[ , [t1,t2[ , ...

    For finite resolution (i.e. computers; assuming 3 digits of
    millisecond precision) and the example of 1 day as interval
    length, this would mean that the interval of a day looks like:
    [date 00:00:00.000, date 23:59:59.999]
    or
    [date 00:00:00.000, date+1 00:00:00.000[
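
    (A small Groovy sketch of that closed-open convention;
    dayContains is just an illustrative helper, not an existing
    method:)

        import java.time.LocalDate
        import java.time.LocalDateTime

        // Closed-open day interval [start, end): the start is included, the end is not.
        boolean dayContains(LocalDate date, LocalDateTime t) {
            def start = date.atStartOfDay()
            def end   = date.plusDays(1).atStartOfDay()
            !t.isBefore(start) && t.isBefore(end)
        }

        def day = LocalDate.of(2021, 11, 18)
        assert  dayContains(day, LocalDateTime.of(2021, 11, 18,  0,  0))      // start of day is in
        assert  dayContains(day, LocalDateTime.of(2021, 11, 18, 23, 59, 59))  // last second is in
        assert !dayContains(day, LocalDateTime.of(2021, 11, 19,  0,  0))      // next midnight is out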

    To sum up:

    1. I have used the convention of choosing the start of the
       interval to be closed and the end to be open (i.e. t0 is in
       the interval, whereas t1 is not), which I have encountered
       time and time again, and therefore assume to be widely used.
    2. Using midnight of the following day only makes sense if you
       invert the open-closed ends of the interval, which as I said
       is quite unusual to me.
    3. Using an application-dependent time such as 21:00, 23:00,
       01:00, 02:00 or 08:00 (because that is "when the backup runs
       or has finished") is certainly something which no one can
       expect to be the convention in a generally used language,
       and would imho be a terrible idea (apart from the fact that
       there is no concept of how to choose one over the other). It
       would also violate the sort order of Date with DateTime in
       the most unexpected way (see the sketch after this list).
       Applications that want/need that will have to use DateTime
       throughout.
    4. As I have said, the only other implicit time I would
       consider slightly viable is noon, but as far as least
       surprise, sort order behavior, etc. go, using the start of
       the day is imho the singular choice.
    5. (Using 00:00:00.000 also follows the time-honored IT
       convention of "filling things up with zeroes" if not
       explicitly told differently ;-) )
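
    (Regarding the sort order mentioned in 3.: a minimal sketch of
    how mixed sorting would behave under the start-of-day
    convention; the asDateTime closure is mine, purely
    illustrative:)

        import java.time.LocalDate
        import java.time.LocalDateTime

        // Hypothetical widening used only for comparison purposes.
        def asDateTime = { t -> t instanceof LocalDate ? t.atStartOfDay() : t }

        def mixed = [
            LocalDateTime.of(2021, 11, 18, 8, 30),
            LocalDate.of(2021, 11, 19),
            LocalDate.of(2021, 11, 18),
        ]
        def sorted = mixed.sort(false) { a, b -> asDateTime(a) <=> asDateTime(b) }

        // Each date sorts directly before the later times of that same day.
        assert sorted*.toString() == ['2021-11-18', '2021-11-18T08:30', '2021-11-19']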

    Cheers,
    mg

    *Otherwise a point in time could be in more than one interval
    (e.g. belong to more than one day).


    On 18/11/2021 14:22, Jochen Theodorou wrote:
    On 17.11.21 20:28, MG wrote:
    [...]
     3. I have never encountered any other assumption than the one
        that a Date carries the implicit time of midnight
        (00:00:00.000...). What other time would one logically
        pick, given that time intervals are by convention typically
        closed on the left and open on the right?

    But here you already have trouble: you can take the start of
    the day, or the end of the day. Both are midnight, both are
    based on the same date, but they are basically 24h apart. In my
    last project we mostly used the end of the day, for example,
    and in some parts actually 2:00 in the morning, because that is
    the time to run after some processes... which did not prove to
    be a good idea, btw.
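
    (In java.time terms, the readings described above, sketched;
    the variable names and the 02:00 value are just illustrative:)

        import java.time.LocalDate
        import java.time.LocalTime

        def date = LocalDate.of(2021, 11, 18)

        def startOfDay = date.atStartOfDay()              // 2021-11-18T00:00
        def endOfDay   = date.plusDays(1).atStartOfDay()  // 2021-11-19T00:00, 24h later
        def backupTime = date.atTime(LocalTime.of(2, 0))  // a project-specific 02:00 convention

        assert startOfDay.toString() == '2021-11-18T00:00'
        assert endOfDay.toString()   == '2021-11-19T00:00'
        assert backupTime.toString() == '2021-11-18T02:00'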

    bye Jochen


