On Wed, 19 Dec 2012, Poul-Henning Kamp wrote:
> --------
> In message <20121220005706.i1...@besplex.bde.org>, Bruce Evans writes:
>> On Wed, 19 Dec 2012, Poul-Henning Kamp wrote:
>>> Except that for absolute timescales, we're running out of the 32 bits
>>> integer part.
>> Except 32 bit time_t works until 2106 if it is unsigned.
> That's sort of not an option.

I think it is. It is just probably not necessary since 32-bit systems
will go away before 2038.
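
The arithmetic is easy to check (a minimal standalone sketch, using the
average Gregorian year):

	#include <stdio.h>

	int
	main(void)
	{
		double secs_per_year = 365.2425 * 24 * 60 * 60;

		/* Overflow year = epoch + max value / seconds per year. */
		printf("signed 32-bit time_t overflows in   ~%.0f\n",
		    1970 + 2147483647.0 / secs_per_year);
		printf("unsigned 32-bit time_t overflows in ~%.0f\n",
		    1970 + 4294967295.0 / secs_per_year);
		return (0);
	}

This prints ~2038 for the signed case and ~2106 for the unsigned one.
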
>>> The real problem was that time_t was not defined as a floating
>>> point number.
>> That would be convenient too, but bad for efficiency on some systems.
>> Kernels might not be able to use it, and then would have to use an
>> alternative representation, which they should have done all along.
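
For scale, an IEEE double would still resolve well under a microsecond
at current dates (a minimal sketch; the date is only illustrative):

	#include <math.h>
	#include <stdio.h>

	int
	main(void)
	{
		/* One ulp of a double near "now" (~1.356e9 s after the epoch). */
		double now = 1356000000.0;
		double ulp = nextafter(now, 2 * now) - now;

		printf("resolution near now: %g ns\n", ulp * 1e9);
		return (0);
	}

(Compile with -lm; it should print ~238 ns.)
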
>>> [1] A good addition to C would be a general multi-word integer type
>>> where you could ask for any int%d_t or uint%d_t you cared for, and
>>> have the compiler DTRT.  Unlike using a multiword library, this would
>>> still give these types their natural integer behaviour.
>> That would be convenient, but bad for efficiency if it were actually
>> used much.
> You can say that about anything but CPU-native operations, and I doubt
> it would be as inefficient as struct bintime, which does not have access
> to the carry bit.

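(For reference, bintime_add() in FreeBSD's <sys/time.h> does the carry
detection by hand, in essentially the same way as the home-made function
below; lightly reformatted here:)

	static __inline void
	bintime_add(struct bintime *bt, const struct bintime *bt2)
	{
		uint64_t u;

		u = bt->frac;
		bt->frac += bt2->frac;
		if (u > bt->frac)	/* unsigned wraparound => carry */
			bt->sec++;
		bt->sec += bt2->sec;
	}
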
Yes, I would say that about non-native. It goes against the spirit of C.
OTOH, compilers are getting closer to giving full access to the carry
bit. I just checked what clang does in a home-made 128-bit add function:
% struct u { unsigned long long w[2]; };	/* w[0] is the low word */
%
% static void __noinline
% uadd(struct u *xup, struct u *yup)
% {
% 	unsigned long long t;
%
% 	t = xup->w[0] + yup->w[0];
% 	if (t < xup->w[0])		/* unsigned wraparound => carry */
% 		xup->w[1]++;
% 	xup->w[0] = t;
% 	xup->w[1] += yup->w[1];
% }
%
% 	.align	16, 0x90
% 	.type	uadd,@function
% uadd:                                   # @uadd
% 	.cfi_startproc
% # BB#0:                                 # %entry
% 	movq	(%rdi), %rcx
% 	movq	8(%rdi), %rax
% 	addq	(%rsi), %rcx
gcc generates an additional cmpq instruction here.
% 	jae	.LBB2_2
clang uses the carry bit set by the first addition to avoid the comparison,
but still branches.
% # BB#1:                                 # %if.then
% 	incq	%rax
% 	movq	%rax, 8(%rdi)
This adds 1 explicitly instead of using adcq, but this is the slow path.
% .LBB2_2:                                # %if.end
% 	movq	%rcx, (%rdi)
% 	addq	8(%rsi), %rax
This is as efficient as possible except for the extra branch, and the
branch is almost perfectly predictable.
% 	movq	%rax, 8(%rdi)
% 	ret
% .Ltmp22:
% 	.size	uadd, .Ltmp22-uadd
% 	.cfi_endproc
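
FWIW, with the nonstandard __int128 extension (in both gcc and clang on
64-bit targets) the compiler gets at the carry bit directly; a minimal
sketch, not benchmarked:

	static void __noinline		/* __noinline as for uadd() above */
	uadd128(unsigned __int128 *xup, unsigned __int128 *yup)
	{

		*xup += *yup;	/* should emit addq + adcq, no branch */
	}

That leaves only the generalization to widths beyond 128 bits for the
compiler to DTRT on.
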
Bruce