Hi,
This would only impact new tables; existing tables would get their
chunk_length_in_kb from the existing schema. It's something we record in a
system table.
I have an implementation of a compact integer sequence that only requires 37%
of the memory required today. So we could do this with…
I think we do, implicitly, support precision and scale - only dynamically. The
precision and scale are defined by the value on insertion, i.e. those necessary
to represent it exactly. During arithmetic operations we currently truncate to
decimal128, but we can (and probably should) change this.
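To make "defined by the value on insertion" concrete, this is the behaviour of
plain java.math.BigDecimal (which backs the CQL decimal type); DECIMAL128 caps
results at 34 significant digits:

import java.math.BigDecimal;
import java.math.MathContext;

public class DynamicPrecision
{
    public static void main(String[] args)
    {
        // Precision and scale come from the inserted literal itself.
        BigDecimal v = new BigDecimal("123.4500");
        System.out.println(v.precision() + " / " + v.scale()); // prints "7 / 4"

        // Division cannot always be exact, so a rounding context is required;
        // DECIMAL128 rounds the quotient to 34 significant digits.
        BigDecimal q = BigDecimal.ONE.divide(new BigDecimal(3), MathContext.DECIMAL128);
        System.out.println(q.precision()); // prints "34"
    }
}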
Hi,
From reading the spec: precision is always implementation-defined. The spec
specifies scale in several cases, but never precision, for any type or
operation (addition/subtraction, multiplication, division).
So we don't implement anything remotely approaching precision and scale in CQL…
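For what the spec does pin down, java.math.BigDecimal happens to follow the
same scale conventions for exact arithmetic, so the rules are easy to
demonstrate (a small sketch, not Cassandra code):

import java.math.BigDecimal;

public class SpecScales
{
    public static void main(String[] args)
    {
        BigDecimal a = new BigDecimal("1.20");  // scale 2
        BigDecimal b = new BigDecimal("0.005"); // scale 3

        // Addition/subtraction: result scale is max(s1, s2).
        System.out.println(a.add(b));      // prints 1.205   (scale 3)

        // Multiplication: result scale is s1 + s2.
        System.out.println(a.multiply(b)); // prints 0.00600 (scale 5)
    }
}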
If it’s in the SQL spec, I’m fairly convinced. Thanks for digging this out
(and Mike for getting some empirical examples).
We still have to decide on the approximate data type to return; right now, we
have float+bigint=double, but float+int=float. I think this is fairly
inconsistent, and either…
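Part of the case for float+bigint=double: a float mantissa (24 bits) keeps
only about 7 decimal digits, while a double (53 bits) keeps about 16, so
widening a bigint to float throws away far more information. A quick Java
illustration:

public class PromotionLoss
{
    public static void main(String[] args)
    {
        long big = 123_456_789_012_345_678L;

        float asFloat = big;   // only the leading ~7 digits survive
        double asDouble = big; // ~16 digits survive

        System.out.println(asFloat);  // prints roughly 1.2345679E17
        System.out.println(asDouble); // prints 1.2345678901234568E17
    }
}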
Hi,
I agree with what's been said about expectations regarding expressions
involving floating point numbers. I think that if one of the inputs is
approximate then the result should be approximate.
One thing we could look at for inspiration is the SQL spec. Not to follow it
dogmatically, necessarily…
Hi,
I'm not sure if I would prefer the Postgres way of doing things, which is
returning just about any type depending on the order of operands. Its docs
actually mention that using numeric/decimal is slow, and note multiple times
that floating points are inexact. So doing some…
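The inexactness warning is easy to demonstrate: binary floating point cannot
represent 0.1 or 0.2 exactly, so even trivial arithmetic drifts (plain Java,
but the same holds in any IEEE 754 implementation):

public class Inexact
{
    public static void main(String[] args)
    {
        System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // prints false
    }
}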
Hi,
Thanks for reporting this. I'll get this fixed today.
Ariel
On Fri, Oct 12, 2018, at 7:21 AM, Tommy Stendahl wrote:
> Hi,
>
> I tested upgrading to Cassandra 4.0. I had an existing cluster with
> 3.0.15 and upgraded the first node, but it fails to start due to a
> NullPointerException.
>
As far as I can tell, we reached a relatively strong consensus that we should
implement lossless casts by default. Does anyone have anything more to add?
Looking at the emails, everyone who participated and expressed a preference was
in favour of the “Postgres approach” of upcasting to decimal…
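For anyone catching up, this is the practical difference the upcast makes,
sketched with plain java.math (illustrative only, not the actual CQL code
path):

import java.math.BigDecimal;

public class LosslessCast
{
    public static void main(String[] args)
    {
        long big = 9_007_199_254_740_993L; // 2^53 + 1, the first long a double cannot represent

        double lossy = big;                // silently rounds to 2^53
        System.out.println((long) lossy == big);           // prints false

        BigDecimal exact = BigDecimal.valueOf(big);        // lossless upcast
        System.out.println(exact.longValueExact() == big); // prints true
    }
}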
Hi,
I tested upgrading to Cassandra 4.0. I had an existing cluster with 3.0.15 and
upgraded the first node, but it fails to start due to a NullPointerException.
The problem is the new table option "speculative_write_threshold": when it
doesn’t exist we get a NullPointerException.
I created a JIRA…
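The shape of the fix is presumably the usual upgrade-safe defaulting pattern;
a hypothetical sketch (names and the default are made up, this is not the
actual Cassandra code):

import java.util.Map;

public class OptionDefaults
{
    // Hypothetical default for illustration only.
    static final String DEFAULT_THRESHOLD = "99PERCENTILE";

    // Return the option if present, else the default, rather than
    // dereferencing a missing value and throwing a NullPointerException.
    static String speculativeWriteThreshold(Map<String, String> options)
    {
        return options.getOrDefault("speculative_write_threshold", DEFAULT_THRESHOLD);
    }

    public static void main(String[] args)
    {
        // A pre-4.0 schema has no such option at all.
        System.out.println(speculativeWriteThreshold(Map.of())); // prints 99PERCENTILE
    }
}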
> On Oct 12, 2018, at 6:46 AM, Pavel Yaskevich wrote:
>
>> On Thu, Oct 11, 2018 at 4:31 PM Ben Bromhead wrote:
>>
>> This is something that's bugged me for ages; tbh the performance gain for
>> most use cases far outweighs the increase in memory usage, and I would even
>> be in favor of changing…